Java Multi-Threading Evolution and Topics
One of our readers, Anant, asked a very good question: list all the topics we should know about multi-threading, including the changes made in Java 8 (beginner level to advanced level). All he wanted to know was the evolution of the multi-threading framework in Java, from the simple Runnable interface to the latest features in Java 8. Let us answer his query.
I spent a good amount of time collecting all the information below, so please feel free to suggest edits/updates if you think otherwise on any point.
The early JDK 1.x releases already contained the core multi-threading types:
java.lang.Thread
java.lang.ThreadGroup
java.lang.Runnable
java.lang.Process
java.lang.ThreadDeath
and some exception classes, e.g.
1. java.lang.IllegalMonitorStateException
2. java.lang.IllegalStateException
3. java.lang.IllegalThreadStateException
JDK 1.2 and JDK 1.3 had no noticeable changes related to multi-threading. (Correct me
if I have missed anything).
In JDK 1.4, there were a few JVM-level changes to suspend/resume multiple threads with a single call, but no major API changes.
JDK 1.5 was the first big release after JDK 1.x, and it included multiple concurrency utilities. Executors, semaphores, mutexes, barriers, latches, concurrent collections and blocking queues were all included in this release. The biggest change for Java multi-threading applications happened in this release.
JDK 1.6 was more about platform fixes than API upgrades, so no new concurrency changes were present in JDK 1.6.
JDK 1.8 is largely known for the lambda changes, but it also had a few concurrency changes. Two new interfaces and four new classes were added to the java.util.concurrent package, e.g. CompletableFuture and CompletionException.
The Collections Framework underwent a major revision in Java 8 to add aggregate operations based on the newly added streams facility and lambda expressions, resulting in a large number of methods being added to almost all collection classes, and thus to the concurrent collections as well.
References:
https://www.cs.princeton.edu/courses/archive/fall97/cs461/jdkdocs/relnotes/intro.html
http://programmers.stackexchange.com/questions/147205/what-were-the-core-api-packages-of-java-1-0
http://docs.oracle.com/javase/1.5.0/docs/guide/concurrency/overview.html
http://docs.oracle.com/javase/7/docs/technotes/guides/concurrency/changes7.html
http://www.oracle.com/technetwork/java/javase/jdk7-relnotes-418459.html
http://docs.oracle.com/javase/8/docs/technotes/guides/concurrency/changes8.html
I hope the above listing helps you understand the multi-threading features, JDK version wise.
Happy Learning !!
Java Concurrency – Thread Safety?
By Lokesh Gupta | Filed Under: Multi Threading
Defining thread safety is surprisingly tricky. A quick Google search turns up numerous
“definitions” like these:
1. Thread-safe code is code that will work even if many Threads are executing it
simultaneously.
2. A piece of code is thread-safe if it only manipulates shared data structures in a
manner that guarantees safe execution by multiple threads at the same time.
Don’t you think that definitions like the above do not actually communicate anything meaningful and even add some confusion? They can’t be ruled out just like that, because they are not wrong, but the fact is they do not provide any practical help or perspective. How do we distinguish between a thread-safe class and an unsafe one? What do we even mean by “safe”?
You will agree that a good class specification contains all the information about a class’s state at any given time and its postconditions when some operation is performed on it.
Since we often don’t write adequate specifications for our classes, how can we possibly
know they are correct? We can’t, but that doesn’t stop us from using them anyway once
we’ve convinced ourselves that “the code works”. This “code confidence” is about as
close as many of us get to correctness.
If the loose use of “correctness” here bothers you, you may prefer to think of a thread-
safe class as one that is no more broken in a concurrent environment than in a
single-threaded environment. Thread-safe classes encapsulate any needed
synchronization so that clients need not provide their own.
Consider a stateless class, for example a StatelessFactorizer that factors a number using only local variables. The transient state for a particular computation exists solely in local variables that are stored on the thread’s stack and are accessible only to the executing thread. One thread accessing a StatelessFactorizer cannot influence the result of another thread accessing the same StatelessFactorizer; because the two threads do not share state, it is as if they were accessing different instances. Since the actions of a thread accessing a stateless object cannot affect the correctness of operations in other threads, stateless objects are thread-safe.
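As a reference point, a minimal sketch of such a stateless class (my own simplified, non-servlet version, not code from the original article) could look like this:
StatelessFactorizer.java
import java.util.ArrayList;
import java.util.List;

public class StatelessFactorizer
{
    // No fields: every invocation works only with local variables on the caller's stack
    public List<Long> factor(long n)
    {
        List<Long> factors = new ArrayList<>();
        for (long i = 2; i <= n; i++) {
            while (n % i == 0) {
                factors.add(i);
                n = n / i;
            }
        }
        return factors;
    }
}
Any number of threads can call factor() on the same instance at once; since there is nothing shared to corrupt, no synchronization is needed.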
That’s all for this small but important concept around What is Thread Safety?
Happy Learning !!
Concurrency vs. Parallelism
By Lokesh Gupta | Filed Under: Multi Threading
Concurrency means multiple tasks which start, run, and complete in overlapping time periods, in no specific order. Parallelism is when multiple tasks, or several parts of a single task, literally run at the same time, e.g. on a multi-core processor. Remember that concurrency and parallelism are NOT the same thing.
Let’s understand in more detail what I mean by Concurrency vs. Parallelism.
Concurrency
Concurrency essentially applies when we talk about at least two tasks. When an application is capable of executing two tasks virtually at the same time, we call it a concurrent application. Though the tasks look like they run simultaneously, essentially they MAY not. They take advantage of the CPU time-slicing feature of the operating system, where each task runs part of its work and then goes into a waiting state. When the first task is in the waiting state, the CPU is assigned to the second task to complete its part of the work.
The operating system, based on the priority of the tasks, thus assigns the CPU and other computing resources (e.g. memory) turn by turn to all tasks and gives each a chance to complete. To the end user, it seems that all tasks are running in parallel. This is called concurrency.
Parallelism
Parallelism does not require two distinct tasks to exist. It literally, physically runs parts of a task, or multiple tasks, at the same time using the multi-core infrastructure of the CPU, by assigning one core to each task or sub-task.
Parallelism essentially requires hardware with multiple processing units. On a single-core CPU, you may get concurrency but NOT parallelism.
Concurrency is about dealing with lots of things at once. Parallelism is about doing
lots of things at once.
An application can be concurrent but not parallel, which means that it processes more than one task at roughly the same time, but no two tasks are executing at exactly the same instant.
An application can be parallel but not concurrent, which means that it processes multiple sub-tasks of a single task at the same time on a multi-core CPU.
An application can be neither parallel nor concurrent, which means that it processes all tasks one at a time, sequentially.
An application can be both parallel and concurrent, which means that it processes multiple tasks concurrently on a multi-core CPU at the same time.
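To make the distinction concrete, here is a small illustrative sketch (class and task names are my own, not from the original text): the two threads run concurrently, interleaved on a single core or possibly in parallel on several, while the parallel stream splits one summation task across the available cores.
ConcurrencyVsParallelismDemo.java
import java.util.stream.LongStream;

public class ConcurrencyVsParallelismDemo
{
    public static void main(String[] args) throws InterruptedException
    {
        // Concurrency: two independent tasks overlap in time; on a single core they
        // interleave via time-slicing, on a multi-core CPU they may also run in parallel.
        Runnable task = () -> System.out.println(Thread.currentThread().getName() + " is running");
        Thread t1 = new Thread(task, "task-1");
        Thread t2 = new Thread(task, "task-2");
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        // Parallelism: sub-parts of a single task (summing a range) run at the
        // same time on multiple cores via the common fork-join pool.
        long sum = LongStream.rangeClosed(1, 1_000_000).parallel().sum();
        System.out.println("Parallel sum = " + sum);
    }
}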
One of the best additions in Java 5 was the atomic operations supported by classes such as AtomicInteger, AtomicLong etc. These classes help you minimize the need for complex (unnecessary) multi-threading code for basic operations such as incrementing or decrementing a value that is shared among multiple threads. These classes internally rely on an algorithm named CAS (compare and swap). In this article, I am going to discuss this concept in detail.
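For instance (an illustrative sketch of my own, not taken from the original article), a shared counter incremented from several threads needs no synchronized block when AtomicInteger is used:
AtomicCounterDemo.java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounterDemo
{
    private static final AtomicInteger counter = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException
    {
        Runnable increment = () -> {
            for (int i = 0; i < 1000; i++) {
                counter.incrementAndGet();   // atomic, lock-free increment backed by CAS
            }
        };
        Thread t1 = new Thread(increment);
        Thread t2 = new Thread(increment);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println("Final count: " + counter.get());   // always 2000
    }
}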
The traditional locking approach (e.g. using the synchronized keyword) is pessimistic in nature: you first lock the shared data and only then operate on it. It’s much like saying “please close the door first; otherwise some other crook will come in and rearrange your stuff”.
Though this approach is safe and it does work, it puts a significant penalty on your application in terms of performance. The reason is simple: waiting threads cannot do anything until they also get a chance to perform the guarded operation.
There exists one more approach which is more efficient in performance, and it is optimistic in nature. In this approach, you proceed with an update, being hopeful that you can complete it without interference. This approach relies on collision detection to determine if there has been interference from other parties during the update, in which case the operation fails and can be retried (or not).
The optimistic approach is like the old saying, “It is easier to obtain forgiveness than
permission”, where “easier” here means “more efficient”.
Compare and Swap is a good example of such an optimistic approach, and it is what we are going to discuss next.
A CAS operation works on three operands: the memory location V on which to operate, the expected old value A that the thread read last time, and the new value B that should be written to V. CAS says, “I think V should have the value A; if it does, put B there, otherwise don’t change it but tell me I was wrong.” CAS is an optimistic technique: it proceeds with the update in the hope of success, and can detect failure if another thread has updated the variable since it was last examined.
Suppose the shared variable V currently holds the value 10 (initial state: V = 10, A = 0, B = 0).
1) Thread 1 and thread 2 both want to increment V; they both read the value 10 and compute the incremented value 11.
2) Now thread 1 comes first and compares V with its last read value:
V = 10, A = 10, B = 11
if A = V
V = B
else
operation failed
return V
Clearly the value of V will be overwritten as 11, i.e. the operation was successful.
3) Now thread 2 comes and compares V with its last read value:
V = 11, A = 10, B = 11
if A = V
V = B
else
operation failed
return V
4) In this case, V is not equal to A, so the value is not replaced and the current value of V, i.e. 11, is returned. Thread 2 now retries the operation with the values:
V = 11, A = 11, B = 12
And this time the condition is met and the incremented value 12 is returned to thread 2.
In summary, when multiple threads attempt to update the same variable simultaneously
using CAS, one wins and updates the variable’s value, and the rest lose. But the losers
are not punished by suspension of thread. They are free to retry the operation or simply
do nothing.
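As a rough illustration of this retry pattern (my own sketch, not code from the original article), an increment built on AtomicInteger’s compareAndSet() looks like this:
CasCounter.java
import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter
{
    private final AtomicInteger value = new AtomicInteger(10);

    public int increment()
    {
        while (true) {
            int current = value.get();     // A: the value this thread read last
            int next = current + 1;        // B: the value we want to write
            // Succeeds only if V still equals A; otherwise another thread won and we retry
            if (value.compareAndSet(current, next)) {
                return next;
            }
        }
    }
}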
That’s all for this simple but important concept related to atomic operations supported in Java.
Happy Learning !!
In Java, the synchronized keyword can be applied to:
a code block
a method
1.1. Syntax
The general syntax for writing a synchronized block is as follows. Here lockObject is a reference to an object whose lock is associated with the monitor that the synchronized statements represent.
synchronized( lockObject )
{
// synchronized statements
}
In this way, the synchronized keyword guarantees that only one thread will be executing the synchronized block statements at a time, and thus prevents multiple threads from corrupting the shared data inside the block.
Keep in mind that if a thread is put to sleep (using the sleep() method), it does not release the lock. During this sleeping time, no other thread will be able to execute the synchronized block statements.
I have created two threads which start executing the printNumbers() method at exactly the same time. Because the block is synchronized, only one thread is allowed access and the other thread has to wait until the first thread is finished (a sketch of what printNumbers() might look like follows the listing).
//first thread
Runnable r = new Runnable()
{
public void run()
{
try {
mathClass.printNumbers(3);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
};
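The printNumbers() method itself is not shown in this excerpt. A minimal sketch of what it could look like (class name, sleep interval and the assumption that the two threads are named ONE and TWO are mine) is:
MathClass.java
class MathClass
{
    void printNumbers(int n) throws InterruptedException
    {
        // Only one thread at a time can enter this block for a given MathClass instance
        synchronized (this) {
            for (int i = 1; i <= n; i++) {
                // thread names ONE / TWO are assumed to be set when the threads are created
                System.out.println(Thread.currentThread().getName() + " :: " + i);
                Thread.sleep(500);
            }
        }
    }
}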
Program output.
ONE :: 1
ONE :: 2
ONE :: 3
TWO :: 1
TWO :: 2
TWO :: 3
Happy Learning !!
In Java, a synchronized block of code can be executed by only one thread at a time. Java also supports multiple threads executing concurrently, which may cause two or more threads to access the same fields or objects at the same time.
Please note that the synchronized keyword can be used only on methods or code blocks. It cannot be used with variables or attributes in a class definition.
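The code samples for this section did not survive in this copy; below is a minimal sketch of the usual options (class and method names are mine). A synchronized instance method or a synchronized(this) block takes the object-level lock, while a static synchronized method or a synchronized(ClassName.class) block takes the class-level lock.
DemoClass.java
public class DemoClass
{
    // Object-level lock: one thread at a time per instance of DemoClass
    public synchronized void demoMethod() { }

    public void demoBlock()
    {
        // Equivalent object-level lock on 'this'
        synchronized (this) {
            // thread-safe code
        }
    }

    // Class-level lock: one thread at a time across all instances
    public static synchronized void demoStaticMethod() { }

    public void demoClassLevelBlock()
    {
        // Equivalent class-level lock on the Class object
        synchronized (DemoClass.class) {
            // thread-safe code
        }
    }
}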
Let me know your thoughts and queries on object-level lock vs class-level lock in Java.
Happy Learning !!
As we all know, in Java there are two ways to create threads: one by implementing the Runnable interface and another by extending the Thread class (a minimal sketch of both is shown below). Let’s identify the differences between the two approaches, i.e. extends Thread vs implements Runnable.
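For reference, a minimal sketch of both approaches (class names are mine, shown as separate files):
// 1. Implementing Runnable
class DemoRunnable implements Runnable
{
    public void run()
    {
        System.out.println("Runnable task is running");
    }
}

// 2. Extending Thread
class DemoThread extends Thread
{
    public void run()
    {
        System.out.println("Thread subclass is running");
    }
}

// Usage:
// new Thread(new DemoRunnable()).start();
// new DemoThread().start();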
See this bug report: http://bugs.java.com/bugdatabase/view_bug.do?bug_id=4533087 – it was fixed in Java 1.5, but Sun did not intend to fix it in 1.4.
That’s all about the differences between the Runnable interface and the Thread class in Java. If you know something more, please put it in the comments section and I will include it in the post content.
Happy Learning !!
You may have faced this question in an interview: what is the difference between a lock and a monitor? Well, to answer this question you must have a good understanding of how Java multi-threading works under the hood.
The short answer: locks provide the necessary support for implementing monitors. For the long answer, read on.
Locks
A lock is a kind of data which is logically part of an object’s header in heap memory. Each object in a JVM has this lock (or mutex) that any program can use to coordinate multi-threaded access to the object. If any thread wants to access the instance variables of that object, the thread must “own” the object’s lock (set some flag in the lock memory area). All other threads that attempt to access the object’s variables have to wait until the owning thread releases the object’s lock (unsets the flag).
Once a thread owns a lock, it can request the same lock again multiple times, but then has to release the lock the same number of times before it is made available to other threads. If a thread requests a lock three times, for example, that thread will continue to own the lock until it has “released” it three times.
Please note that a lock is acquired by a thread when it explicitly asks for it. In Java, this is done with the synchronized keyword; a thread that calls wait() temporarily releases the lock and re-acquires it before returning.
Monitors
A monitor is a synchronization construct that allows threads to have both mutual exclusion (using locks) and cooperation, i.e. the ability to make threads wait for a certain condition to be true (using a wait-set).
In other words, along with the data that implements a lock, every Java object is logically associated with data that implements a wait-set. Whereas locks help threads work independently on shared data without interfering with one another, wait-sets help threads cooperate with one another to work together towards a common goal, e.g. all waiting threads are moved to this wait-set and all are notified once the lock is released. The wait-set, with the additional help of the lock (mutex), is what builds a monitor.
Mutual exclusion
Putting in very simple words, a monitor is like a building that contains one special room
(object instance) that can be occupied by only one thread at a time. The room usually
contains some data which needs to be protected from concurrent access. From the time
a thread enters this room to the time it leaves, it has exclusive access to any data in the
room. Entering the monitor building is called “entering the monitor.” Entering the
special room inside the building is called “acquiring the monitor.” Occupying the room
is called “owning the monitor,” and leaving the room is called “releasing the monitor.”
Leaving the entire building is called “exiting the monitor.”
When a thread arrives to access the protected data (enter the special room), it is first put in a queue at the building reception (the entry-set). If no other thread owns the monitor, the thread acquires the lock and continues executing the protected code. When the thread finishes execution, it releases the lock and exits the building (exiting the monitor).
If, when a thread arrives, another thread already owns the monitor, it must wait in the reception queue (the entry-set). When the current owner exits the monitor, the newly arrived thread must compete with any other threads also waiting in the entry-set. Only one thread will win the competition and own the lock.
Cooperation
In general, mutual exclusion is important only when multiple threads are sharing data or
some other resource. If two threads are not working with any common data or resource,
they usually can’t interfere with each other and needn’t execute in a mutually exclusive
way. Whereas mutual exclusion helps keep threads from interfering with one another
while sharing data, cooperation helps threads to work together towards some common
goal.
This cooperation requires both the entry-set and the wait-set. The diagram described below will help you understand this cooperation.
The figure shows the monitor as three rectangles. In the center, a large rectangle contains a single thread, the monitor’s owner. On the left, a small rectangle contains the entry-set. On the right, another small rectangle contains the wait-set.
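To see both halves of a monitor in one place, here is a small illustrative sketch of my own (not from the original article): the synchronized methods give mutual exclusion, while wait() parks a thread in the object’s wait-set until notifyAll() wakes it.
MessageBox.java
public class MessageBox
{
    private String message;
    private boolean empty = true;

    // Consumer: waits (in the wait-set) until a message is available
    public synchronized String take() throws InterruptedException
    {
        while (empty) {
            wait();                 // releases the lock and joins the wait-set
        }
        empty = true;
        notifyAll();                // wake producers waiting for the box to become empty
        return message;
    }

    // Producer: waits until the box is empty, then puts a message
    public synchronized void put(String message) throws InterruptedException
    {
        while (!empty) {
            wait();
        }
        this.message = message;
        empty = false;
        notifyAll();                // wake consumers waiting for a message
    }
}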
I hope the above discussion helps you gain more insight. Feel free to ask any questions.
Happy Learning !!
One of the benefits of the Java executor framework is that we can run concurrent tasks that return a single result after processing. The Java Concurrency API achieves this with the following two interfaces: Callable and Future.
1.1. Callable
Callable interface has the call() method. In this method, we have to implement the
logic of a task. The Callable interface is a parameterized interface, meaning we have to
indicate the type of data the call() method will return.
1.2. Future
The Future interface has methods to obtain the result generated by a Callable object and to manage its state.
FactorialCalculator.java
import java.util.concurrent.Callable;
import java.util.concurrent.TimeUnit;

public class FactorialCalculator implements Callable<Integer>
{
    private final Integer number;

    public FactorialCalculator(Integer number) {
        this.number = number;
    }

    @Override
    public Integer call() throws Exception {
        int result = 1;
        if ((number == 0) || (number == 1)) {
            result = 1;
        } else {
            for (int i = 2; i <= number; i++) {
                result *= i;
                TimeUnit.MILLISECONDS.sleep(20);
            }
        }
        System.out.println("Result for number - " + number + " -> " + result);
        return result;
    }
}
Now let’s test the above factorial calculator using two threads and 4 numbers.
CallableExample.java
package com.howtodoinjava.demo.multithreading;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
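The body of CallableExample did not survive in this copy. A sketch consistent with the imports above and the output below (pool size, task count and message format are assumptions) could be:
public class CallableExample
{
    public static void main(String[] args)
    {
        // Thread pool with 2 worker threads
        ThreadPoolExecutor executor = (ThreadPoolExecutor) Executors.newFixedThreadPool(2);
        List<Future<Integer>> resultList = new ArrayList<>();
        Random random = new Random();

        // Submit 4 factorial tasks for random numbers
        for (int i = 0; i < 4; i++)
        {
            Integer number = random.nextInt(10);
            FactorialCalculator calculator = new FactorialCalculator(number);
            Future<Integer> result = executor.submit(calculator);
            resultList.add(result);
        }

        // Read the results as the tasks finish
        for (Future<Integer> future : resultList)
        {
            try
            {
                System.out.println("Future result is - " + future.get() + "; And Task done is " + future.isDone());
            }
            catch (InterruptedException | ExecutionException e)
            {
                e.printStackTrace();
            }
        }
        executor.shutdown();
    }
}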
Program output.
Console
Result for number - 4 -> 24
Result for number - 6 -> 720
Future result is - - 720; And Task done is true
Future result is - - 24; And Task done is true
Result for number - 2 -> 2
Result for number - 6 -> 720
Future result is - - 720; And Task done is true
Future result is - - 2; And Task done is true
1. We can control the status of the task: we can cancel the task and check whether it has finished. For this purpose, we have used the isDone() method to check if the tasks have finished.
2. We can get the result returned by the call() method. For this purpose, we have used the get() method. This method waits until the Callable object has finished executing the call() method and has returned its result.
If the thread is interrupted while the get() method is waiting for the result, it throws an InterruptedException. If the call() method throws an exception, get() throws an ExecutionException.
Happy Learning !!
As we already know, there are two kinds of exceptions in Java: checked exceptions and unchecked exceptions. Checked exceptions must be specified in the throws clause of a method or caught inside it. Unchecked exceptions don’t have to be specified or caught. When a checked exception is thrown inside the run() method of a Thread object, we have to catch and handle it there, because the run() method doesn’t accept a throws clause. But when an unchecked exception is thrown inside the run() method of a Thread object, the default behavior is to write the stack trace to the console (or log it to an error log file) and exit the program.
Fortunately, Java provides us with a mechanism to catch and handle the unchecked exceptions thrown in a Thread object to avoid the program ending. This can be done using UncaughtExceptionHandler.
Task.java
public class Task implements Runnable
{
    @Override
    public void run()
    {
        System.out.println(Integer.parseInt("123"));
        System.out.println(Integer.parseInt("234"));
        System.out.println(Integer.parseInt("345"));
        // non-numeric input (inferred from the output below); throws NumberFormatException
        System.out.println(Integer.parseInt("XYZ"));
        System.out.println(Integer.parseInt("456"));
    }
}
DemoThreadExample.java
thread.start();
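Only the thread.start() call of this class survives here; a minimal sketch of the driver, assuming the Task class from the previous listing, is:
public class DemoThreadExample
{
    public static void main(String[] args)
    {
        Task task = new Task();
        Thread thread = new Thread(task);
        thread.start();
    }
}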
123
234
345
Exception in thread "Thread-0" java.lang.NumberFormatException: For input string: "XYZ"
at java.lang.NumberFormatException.forInputString(Unknown Source)
at java.lang.Integer.parseInt(Unknown Source)
at java.lang.Integer.parseInt(Unknown Source)
at examples.algorithms.sleepingbarber.Task.run(DemoThreadExample.java:24)
at java.lang.Thread.run(Unknown Source)
ExceptionHandler.java
public class ExceptionHandler implements Thread.UncaughtExceptionHandler
{
    @Override
    public void uncaughtException(Thread t, Throwable e)
    {
        System.out.printf("An exception has been captured\n");
        System.out.printf("Thread: %s\n", t.getId());
        System.out.printf("Exception: %s\n", e);
        System.out.printf("Stack Trace:\n");
        e.printStackTrace(System.out);
        System.out.printf("Thread status: %s\n", t.getState());
        // restart the task in a new thread (assumed from the repeating output below;
        // the original restart logic is not fully shown)
        new Thread(new Task()).start();
    }
}
Task.java (modified run() method)
@Override
public void run()
{
    // Register the handler for the current thread
    Thread.currentThread().setUncaughtExceptionHandler(new ExceptionHandler());
    System.out.println(Integer.parseInt("123"));
    System.out.println(Integer.parseInt("234"));
    System.out.println(Integer.parseInt("345"));
    System.out.println(Integer.parseInt("XYZ")); // non-numeric input triggers the handler
    System.out.println(Integer.parseInt("456"));
}
Now run the above example once again. It will keep running. In real life, if the task is able to complete its work, it will exit without throwing any exception and complete its life cycle.
123
234
345
An exception has been captured
Thread: 1394
Exception: java.lang.NumberFormatException: For input string: "XYZ"
Stack Trace:
java.lang.NumberFormatException: For input string: "XYZ"
at java.lang.NumberFormatException.forInputString(Unknown Source)
at java.lang.Integer.parseInt(Unknown Source)
at java.lang.Integer.parseInt(Unknown Source)
at examples.algorithms.sleepingbarber.Task.run(DemoThreadExample.java:24)
at java.lang.Thread.run(Unknown Source)
Thread status: RUNNABLE
123
234
345
An exception has been captured
Thread: 1395
Exception: java.lang.NumberFormatException: For input string: "XYZ"
Stack Trace:
java.lang.NumberFormatException: For input string: "XYZ"
at java.lang.NumberFormatException.forInputString(Unknown Source)
at java.lang.Integer.parseInt(Unknown Source)
at java.lang.Integer.parseInt(Unknown Source)
at examples.algorithms.sleepingbarber.Task.run(DemoThreadExample.java:24)
at java.lang.Thread.run(Unknown Source)
Thread status: RUNNABLE
123
234
345
The above implementation helps you run a thread in such a way that it keeps running until its task is done. This can be achieved through other multi-threading concepts as well.
Please note that UncaughtExceptionHandler can also be used just to make logging more robust, without restarting the thread, because the default logs often don’t provide enough information about the context in which the thread execution failed.
Happy Learning !!
As you may know, in web servers you can configure the maximum number of concurrent connections to the server. If more connections than this limit arrive at the server, they have to wait until some other connections are freed or closed. This limitation can be seen as throttling. Throttling is the capability of regulating the rate of input for a system where the output rate is slower than the input rate. It is necessary to stop the system from crashing or from resource exhaustion.
Our approach was already good enough and capable of handling most practical scenarios. Now let’s add one more concept to it which may prove beneficial in some conditions: throttling of task submission to the queue.
In this example, throttling will help keep the number of tasks in the queue within a limit so that no task gets rejected. It essentially removes the necessity of a RejectedExecutionHandler as well.
DemoTask.java
public class DemoTask implements Runnable
{
    private String name;

    public DemoTask(String name) { this.name = name; }

    public String getName() { return this.name; }

    @Override
    public void run() {
        try {
            System.out.println("Executing : " + name);
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
CustomThreadPoolExecutor.java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CustomThreadPoolExecutor extends ThreadPoolExecutor
{
    public CustomThreadPoolExecutor(int corePoolSize, int maximumPoolSize,
            long keepAliveTime, TimeUnit unit, BlockingQueue<Runnable> workQueue) {
        super(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue);
    }

    @Override
    protected void beforeExecute(Thread t, Runnable r) {
        super.beforeExecute(t, r);
    }

    @Override
    protected void afterExecute(Runnable r, Throwable t) {
        super.afterExecute(r, t);
        if (t != null) {
            t.printStackTrace();
        }
    }
}
DemoExecutor.java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class DemoExecutor
{
    public static void main(String[] args)
    {
        Integer threadCounter = 0;
        // pool and queue sizes below are assumed; the original listing is incomplete
        BlockingQueue<Runnable> blockingQueue = new ArrayBlockingQueue<Runnable>(50);
        CustomThreadPoolExecutor executor = new CustomThreadPoolExecutor(10, 20, 5000,
                TimeUnit.MILLISECONDS, blockingQueue);

        executor.setRejectedExecutionHandler(new RejectedExecutionHandler()
        {
            @Override
            public void rejectedExecution(Runnable r, ThreadPoolExecutor executor)
            {
                System.out.println("DemoTask Rejected : " + ((DemoTask) r).getName());
                try
                {
                    Thread.sleep(1000);
                } catch (InterruptedException e)
                {
                    e.printStackTrace();
                }
                // retry the rejected task
                executor.execute(r);
            }
        });

        executor.prestartAllCoreThreads();

        while (true)
        {
            threadCounter++;
            executor.execute(new DemoTask(threadCounter.toString()));
            if (threadCounter == 1000)
                break;
        }

        executor.shutdown();
    }
}
If we run the above program, we will get the output like below:
DemoTask Rejected : 71
Executing : 3
Executing : 5
...
...
There will be multiple occurrences of “DemoTask Rejected”. In the next solution, we will apply the throttling technique so that no task gets rejected.
package threadpoolDemo;

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.Semaphore;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// class name, constructor and semaphore sizing below are assumed; the original listing is incomplete
public class BlockingThreadPoolExecutor extends ThreadPoolExecutor
{
    private final Semaphore semaphore;

    public BlockingThreadPoolExecutor(int poolSize, int queueSize, long keepAliveTime,
            TimeUnit unit, BlockingQueue<Runnable> workQueue) {
        super(poolSize, poolSize, keepAliveTime, unit, workQueue);
        // permits = pool size + queue capacity, so execute() blocks instead of rejecting
        semaphore = new Semaphore(poolSize + queueSize);
    }

    @Override
    protected void beforeExecute(Thread t, Runnable r) {
        super.beforeExecute(t, r);
    }

    @Override
    public void execute(Runnable task) {
        boolean acquired = false;
        do {
            try {
                // block until the pool or the queue has room for one more task
                semaphore.acquire();
                acquired = true;
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        } while (!acquired);

        try {
            super.execute(task);
        } catch (final RejectedExecutionException e) {
            System.out.println("Task Rejected");
            semaphore.release();
            throw e;
        }
    }

    @Override
    protected void afterExecute(Runnable r, Throwable t) {
        super.afterExecute(r, t);
        if (t != null) {
            t.printStackTrace();
        }
        // a task has finished; release one permit so another task can be submitted
        semaphore.release();
    }
}
package threadpoolDemo;

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class DemoExecutor
{
    public static void main(String[] args)
    {
        Integer threadCounter = 0;
        // pool and queue sizes are assumed, as in the earlier listing
        BlockingQueue<Runnable> blockingQueue = new ArrayBlockingQueue<Runnable>(50);
        BlockingThreadPoolExecutor executor =
                new BlockingThreadPoolExecutor(10, 50, 5000, TimeUnit.MILLISECONDS, blockingQueue);

        // the handler is kept for safety, but with throttling in place it should never fire
        executor.setRejectedExecutionHandler(new RejectedExecutionHandler()
        {
            @Override
            public void rejectedExecution(Runnable r, ThreadPoolExecutor executor)
            {
                System.out.println("DemoTask Rejected : " + ((DemoTask) r).getName());
                try
                {
                    Thread.sleep(1000);
                } catch (InterruptedException e)
                {
                    e.printStackTrace();
                }
                executor.execute(r);
            }
        });

        executor.prestartAllCoreThreads();

        while (true)
        {
            threadCounter++;
            executor.execute(new DemoTask(threadCounter.toString()));
            if (threadCounter == 1000)
                break;
        }

        executor.shutdown();
    }
}
Java inter-thread communication has been a popular interview question for a long time.
With the JDK 1.5 release, ExecutorService and BlockingQueue brought another way of
doing it more effectively, but the piped streams approach is also worth knowing and might be useful in certain scenarios.
Data written by one thread comes out to the reading thread in FIFO order, first-in first-out, just like from real plumbing pipes.
PipedReader and PipedWriter
PipedReader is an extension of the Reader class and is used for reading character streams. Its read() method reads from the connected PipedWriter’s stream. Similarly, PipedWriter is an extension of the Writer class and fulfils the Writer contract: it writes characters into the pipe for the connected PipedReader to consume.
Once the two ends are connected (either by passing one to the other’s constructor or by calling connect()), any thread can write data to the stream using the write(....) methods, and the data will be available to the reader and can be read using the read() method.
import java.io.*;
public class PipedCommunicationTest
{
    public PipedCommunicationTest()
    {
        try
        {
            // Create writer and reader instances
            PipedReader pr = new PipedReader();
            PipedWriter pw = new PipedWriter();

            // Connect both ends of the pipe (new PipedWriter(pr) would do the same)
            pw.connect(pr);

            // The original example starts a writer thread and a reader thread here,
            // which call pw.write(...) and pr.read() respectively; those classes are not shown.
        }
        catch (Exception e)
        {
            System.out.println("PipeThread Exception: " + e);
        }
    }
}
Program Output:
Summary
You cannot write to a pipe without having some sort of reader created and
connected to it. In other words, both ends must be present and already connected
for the writing end to work.
You cannot switch to another reader, to which the pipe was not originally
connected, once you are done writing to a pipe.
You cannot read back from the pipe if you close the reader. You can close the
writing end successfully, however, and still read from the pipe.
You cannot read back from the pipe if the thread which wrote to it ends.
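For completeness, here is a small self-contained sketch of my own (thread structure and message text are assumptions) that demonstrates the FIFO behaviour and the connect-before-write rule described above:
PipedExample.java
import java.io.PipedReader;
import java.io.PipedWriter;

public class PipedExample
{
    public static void main(String[] args) throws Exception
    {
        PipedReader pr = new PipedReader();
        PipedWriter pw = new PipedWriter(pr);   // writer connected to reader at construction

        // Writer thread: pushes characters into the pipe and closes its end
        Thread writer = new Thread(() -> {
            try {
                pw.write("Hello from the writer thread");
                pw.close();
            } catch (Exception e) {
                System.out.println("PipeThread Exception: " + e);
            }
        });

        // Reader thread: reads characters back in FIFO order until the stream ends
        Thread reader = new Thread(() -> {
            try {
                int ch;
                while ((ch = pr.read()) != -1) {
                    System.out.print((char) ch);
                }
                System.out.println();
            } catch (Exception e) {
                System.out.println("PipeThread Exception: " + e);
            }
        });

        writer.start();
        reader.start();
        writer.join();
        reader.join();
    }
}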
In my previous post about auto-reloading the configuration when a property file changes, I discussed refreshing your application configuration using the Java WatchService. Since configurations are shared resources, when they are accessed by multiple threads there is always a chance of writing incorrect code which can result in a deadlock situation.
1. Deadlock
In Java, a deadlock is a situation where at least two threads are each holding a lock on a different resource, both are waiting for the other’s resource to complete their task, and neither is able to release the lock on the resource it is holding.
Deadlock Scenario
In the above scenario, Thread-1 has A but needs B to complete its processing, and similarly Thread-2 has resource B but needs A first.
ResolveDeadLockTest.java
package thread;

public class ResolveDeadLockTest
{
    public static void main(String[] args)
    {
        ResolveDeadLockTest test = new ResolveDeadLockTest();

        final A a = test.new A();
        final B b = test.new B();

        // Thread-1
        Runnable block1 = new Runnable() {
            public void run() {
                synchronized (a) {
                    try {
                        // Adding delay so that both threads can start trying to lock resources
                        Thread.sleep(100);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    // Thread-1 has locked A but also needs B
                    synchronized (b) {
                        System.out.println("In block 1");
                    }
                }
            }
        };

        // Thread-2
        Runnable block2 = new Runnable() {
            public void run() {
                synchronized (b) {
                    // Thread-2 has locked B but also needs A
                    synchronized (a) {
                        System.out.println("In block 2");
                    }
                }
            }
        };

        new Thread(block1).start();
        new Thread(block2).start();
    }

    // Resource A
    private class A {
        private int i = 10;
        public int getI() { return i; }
        public void setI(int i) { this.i = i; }
    }

    // Resource B
    private class B {
        private int i = 20;
        public int getI() { return i; }
        public void setI(int i) { this.i = i; }
    }
}
Running above code will result in a deadlock for very obvious reasons (explained above).
Now we have to solve this issue.
ResolveDeadLockTest.java (only the changed part)
// Thread-1: acquires the locks in the order b -> a
Runnable block1 = new Runnable() {
    public void run() {
        synchronized (b) {
            try {
                // Adding delay so that both threads can start trying to lock resources
                Thread.sleep(100);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            synchronized (a) {
                System.out.println("In block 1");
            }
        }
    }
};

// Thread-2: acquires the locks in the same order b -> a
Runnable block2 = new Runnable() {
    public void run() {
        synchronized (b) {
            synchronized (a) {
                System.out.println("In block 2");
            }
        }
    }
};
Run the above class again, and you will not see any deadlock situation. Because both threads now acquire the locks in the same order (b first, then a), a circular wait can no longer occur. I hope this will help you in avoiding deadlocks, and in resolving them if you encounter one.