[CS Fundamentals] Managing Deadlock in Operating Systems: Prevention and Detection Techniques
Definition & When it Happens
Deadlock is a situation in which two or more processes are unable to proceed because each is waiting for the other(s) to release resources, creating a cycle of dependency. In real life, a deadlock looks something like this situation:
Multiple processes might need the same resource. Let’s say there are Process A, Process B, Resource A and Resource B.
Process A holds Resource B, and needs Resource A
Process B holds Resource A, and needs Resource B
Neither can proceed: Process A is holding the resource that Process B needs, and Process B is holding the resource that Process A needs.
Each is waiting on the other to release the resource it needs, resulting in a deadlock.
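The scenario above can be reproduced with two threads and two locks. Here is a minimal Python sketch (the names `resource_a`/`resource_b` and the timing values are illustrative); a real deadlocked process would block forever, so this demo uses a timeout just to make the deadlock observable:

```python
import threading
import time

# Two locks stand in for Resource A and Resource B (names are illustrative).
resource_a = threading.Lock()
resource_b = threading.Lock()
barrier = threading.Barrier(2)
results = {}

def process(name, held_resource, needed_resource):
    with held_resource:
        barrier.wait()                 # both processes now hold one resource each
        # Try to acquire the other resource; time out instead of waiting forever
        # so the demo terminates (a truly deadlocked process would block here).
        results[name] = needed_resource.acquire(timeout=0.3)
        if results[name]:
            needed_resource.release()
        time.sleep(0.5)                # keep holding past the peer's timeout

a = threading.Thread(target=process, args=("A", resource_b, resource_a))
b = threading.Thread(target=process, args=("B", resource_a, resource_b))
a.start(); b.start(); a.join(); b.join()
print(results)  # both acquisitions fail: neither process can proceed
```

Each thread holds one lock while waiting on the other, so both acquisition attempts time out.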
4 Key Conditions That Cause Deadlock (Coffman Conditions)
We call these four conditions “Coffman Conditions” in computer science.
Mutual Exclusion
- Only one process can use the resource at a time.
Hold and Wait
- There must be a process that is holding at least one resource and waiting to acquire additional resources held by other processes.
No Preemption
- Resources that are assigned to a process cannot be forcibly taken away.
Circular Wait
- Each process in the set is waiting for a resource held by the next process, forming a circular chain.
This diagram explains the circular wait situation.
If at least one of these four conditions is not satisfied, deadlock cannot happen.
How to Handle Deadlock
Prevention
It is important to design the system in such a way that at least one of these conditions can never hold.
Eliminate one of the Coffman conditions (usually at a high cost).
- For example, allowing multiple processes to share the same resource (such as read-only data) eliminates the mutual exclusion condition.
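Another commonly used prevention technique is to break the circular wait condition by assigning every resource a global rank and requiring processes to acquire resources in ascending rank order. A minimal Python sketch (the ranking scheme and helper names are illustrative, not from the article):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

# Assign each lock a global rank; every thread must acquire in ascending rank.
LOCK_ORDER = {id(lock_a): 1, id(lock_b): 2}

def acquire_in_order(*locks):
    # Sort by rank so all threads take locks in the same global order,
    # which makes a circular wait impossible.
    ordered = sorted(locks, key=lambda lock: LOCK_ORDER[id(lock)])
    for lock in ordered:
        lock.acquire()
    return ordered

def release_all(locks):
    for lock in reversed(locks):
        lock.release()

done = []

def worker(name, *locks):
    held = acquire_in_order(*locks)
    done.append(name)
    release_all(held)

# The workers request the locks in opposite textual order, but the helper
# normalizes the order, so neither can end up waiting on the other in a cycle.
t1 = threading.Thread(target=worker, args=("A", lock_b, lock_a))
t2 = threading.Thread(target=worker, args=("B", lock_a, lock_b))
t1.start(); t2.start(); t1.join(); t2.join()
print(sorted(done))  # ['A', 'B']
```

Because every thread follows the same total order, no cycle of waits can form, so this one Coffman condition is eliminated.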
Avoidance
Banker’s Algorithm
It is inspired by the way a bank might manage loan requests to avoid bankruptcy.
When a process requests a resource, check if the system remains in a safe state after allocating the resource.
- A system is in a safe state if there exists at least one ordering of the processes in which every process can obtain the resources it still needs, run to completion, and release its resources so that the next process in the sequence can proceed. A safe state guarantees that deadlock can be avoided.
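The safe-state check at the heart of the Banker’s Algorithm can be sketched as follows; the process and resource numbers below are made-up example data:

```python
# Banker's Algorithm safety check (the numeric example is illustrative).
def is_safe(available, allocation, need):
    """Return a safe sequence of process indices, or None if the state is unsafe."""
    work = list(available)               # resources currently free
    finished = [False] * len(allocation)
    sequence = []
    progress = True
    while progress:
        progress = False
        for i, (alloc, nd) in enumerate(zip(allocation, need)):
            if not finished[i] and all(n <= w for n, w in zip(nd, work)):
                # Process i can run to completion; reclaim its allocation.
                work = [w + a for w, a in zip(work, alloc)]
                finished[i] = True
                sequence.append(i)
                progress = True
    return sequence if all(finished) else None

# Example: 5 processes, 3 resource types.
available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
max_demand = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
need = [[m - a for m, a in zip(mx, al)] for mx, al in zip(max_demand, allocation)]

result = is_safe(available, allocation, need)
print(result)  # a safe sequence exists, so requests leading to this state are granted
```

If granting a request would leave the system with no such sequence, the Banker’s Algorithm makes the process wait instead.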
Resource-Allocation Graph (RAG) Algorithm
- A graphical method for modeling resource allocation and detecting deadlock in a system.
Detection
You can detect a deadlock by searching the resource-allocation graph (RAG), or an equivalent wait-for graph, for a cycle.
The detection algorithm can be run on every resource request or periodically; running it more often catches deadlocks sooner but incurs more overhead, so how frequently to invoke it is a trade-off.
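One common way to implement detection is to reduce the RAG to a wait-for graph (processes only, with an edge from each process to the process holding what it waits for) and search it for a cycle with depth-first search. A minimal sketch, assuming a simple adjacency-list representation (the graph data is made up):

```python
# Deadlock detection on a wait-for graph: an edge P -> Q means
# "process P is waiting for a resource held by process Q".
def has_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2     # unvisited / on current DFS path / done
    color = {node: WHITE for node in graph}

    def dfs(node):
        color[node] = GRAY
        for neighbor in graph.get(node, []):
            if color.get(neighbor, WHITE) == GRAY:
                return True          # back edge to the current path: cycle found
            if color.get(neighbor, WHITE) == WHITE and dfs(neighbor):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in list(graph))

# Process A waits on B and B waits on A: the circular wait from earlier.
deadlocked = has_cycle({"A": ["B"], "B": ["A"]})
free       = has_cycle({"A": ["B"], "B": []})
print(deadlocked, free)  # True False
```

A cycle in the wait-for graph means the processes on it are deadlocked (with single-instance resources), and the system can then recover, for example by terminating one of them.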