Concurrency and Asynchronous Operation
Objectives
The aim of this tutorial is to provide a practical introduction to the
threading, concurrency control and asynchronous facilities of .NET
and how these can be used to create responsive mobile client
applications.
Prerequisites
You should have read the accompanying lecture notes on concurrency
and threading before attempting this tutorial.
Lab Setup
No special configuration is required to undertake these exercises.
Introduction
In this tutorial we will see how to explicitly and implicitly create and use
threads in .NET applications. We will also consider the various mechanisms
available for controlling concurrency, how these can be applied to ensure
the logical correctness of our computations, and the problems that can arise from
their use.
Threads
Every program has at least one thread of control. Essentially a thread of
control (or thread for short) is a section of code that executes its statements
one after another, independently of other threads of control, within a single
program. Most of the programs you have written previously will have been
single-threaded programs. Such programs consist of a single thread of
control which starts at the first statement of the Main() method and
executes all subsequent statements one after another until the program
completes. However it is possible for a program to have multiple threads of
control which operate simultaneously. Such a program is said to be
multithreaded and has the following characteristics
• Each thread begins executing at a pre-defined location and executes
all subsequent code in an ordered sequence. When a statement
completes, the thread always executes the next statement in
sequence.
• Each thread executes its own code independently of the other threads
in the program. However, threads can cooperate with each other if
the programmer so chooses. We will examine various cooperation
methods in later sessions.
• All the threads appear to execute simultaneously due to the
multitasking implemented by the virtual machine. The degree of
simultaneity is affected by various factors including the priority of
the threads, the state of the threads and the scheduling scheme used
by the Common Language Runtime (CLR)1.
• All the threads execute in the same virtual address space; if two
threads access memory address 100 they will both be accessing the
same real memory address and the data it contains.
• All the threads have access to the global variables in a program but
may also define their own local variables.
1 The CLR uses the thread scheduling mechanism of the Windows operating system on
which it is running. The .NET Framework runs on desktop (Win32) Windows whilst the .NET Compact
Framework runs on Windows CE. Although both of these operating systems are called
Windows, they are distinct operating systems and Windows CE uses a different
scheduling system. In some cases this may cause differences in threading behaviour in
managed code run on these two platforms.
Processes
Operating systems have traditionally implemented single-threaded
processes which have the following characteristics
• Each process begins executing at a pre-defined location (usually the
first statement of Main()) and executes all subsequent code in an
ordered sequence. When a statement completes, the process always
executes the next statement in sequence.
• All processes appear to execute simultaneously due to the
multitasking implemented by the operating system.
• Processes execute independently but may cooperate if the
programmer so chooses using the interprocess communication
mechanisms provided by the operating system.
• Each process executes in its own virtual address space; two
processes which access memory address 100 will not access the
same real memory address and will therefore not see the same data.
• All variables declared by a process are local to that process and not
available from other processes.
2 Although threads and processes appear to execute simultaneously, on a single CPU
machine they actually execute in turn in a round-robin fashion. Each thread is given a
timeslice, a short period of time during which it will execute on the CPU. When the
timeslice expires, the thread will be removed from the CPU and the scheduler will choose
another thread to execute. This operation is known as a context switch and takes a finite
amount of time. A context switch between threads imposes a lower overhead than a context
switch between processes.
Asynchronous Operation
Many of the operations we perform in the code that we write are
synchronous. Method calls are a good example of a synchronous operation.
When we call a method, control passes to the called method and does not
return to the caller until the method call completes. The caller is thus
blocked and cannot continue until the operation it initiated (in this case, the
method call) completes.
Asynchronous operations are also possible and are supported by many of the
classes in the .NET Framework and .NET Compact Framework libraries,
particularly those involving I/O and networking (such methods are easily
recognized as their names begin with Begin or End). In an asynchronous
operation, the caller initiates an operation and control immediately returns to
the caller which can then carry on with other processing. Meanwhile, the
operation it initiated executes in the background and will continue until it
completes. Note that in this mode of operation the caller is not blocked and
can carry on doing other things whilst the asynchronous operation is carried
out. This is extremely useful in client applications with a user interface,
such as those found in mobile applications. We want the user to be able to
initiate actions through the user interface, but we also want the interface and
application to remain responsive whilst those actions are performed. For
example, we may have a mobile email client application which needs to
retrieve the latest messages from the network. This is a slow operation and
whilst it proceeds, the user may want to do other things with the application,
such as compose new email messages.
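To make the pattern concrete, here is a minimal sketch (not taken from the tutorial; the file name, buffer size and method names are assumptions) showing an asynchronous read from a FileStream using the Begin/End pair and an AsyncCallback delegate:

using System;
using System.IO;
using System.Text;

class AsyncReadSketch
{
    static byte[] buffer = new byte[1024];
    static FileStream fs;

    static void Main()
    {
        fs = new FileStream("messages.txt", FileMode.Open, FileAccess.Read);

        // Start the read and return immediately; the callback fires when the
        // operation completes in the background.
        fs.BeginRead(buffer, 0, buffer.Length, new AsyncCallback(OnReadComplete), null);

        Console.WriteLine("Read started; the caller is free to do other work...");
        Console.ReadLine();   // keep the process alive for the demonstration
    }

    static void OnReadComplete(IAsyncResult ar)
    {
        int bytesRead = fs.EndRead(ar);   // collect the result of the operation
        Console.WriteLine("Read {0} bytes: {1}",
            bytesRead, Encoding.ASCII.GetString(buffer, 0, bytesRead));
        fs.Close();
    }
}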
3 AsyncCallback is a delegate type (derived from System.Delegate) that refers to a
method which accepts a single parameter of type IAsyncResult.
value (i.e. the service is being used to update data stored on the server). A
one-way method is specified in the Web service by tagging it with the
[OneWay] attribute. Any method tagged in this fashion will always be
invoked asynchronously by the client and there is no need to use the Begin /
End methods for the method to achieve asynchronous operation. Note that
the method must have a void return type as it is not possible to return a
value from a one-way method. One advantage of one-way calls is that
nothing is ever returned from the call to the client. If the call fails for some
reason no exception is passed back to the client. This is usually the desired
behaviour if the client does not understand the significance of the exception
or can do nothing to correct it.
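As a rough sketch of the idea (the class and method names here are assumptions; the attribute shown is the OneWayAttribute from the System.Runtime.Remoting.Messaging namespace), a one-way method is simply a void method tagged with the attribute:

using System;
using System.Runtime.Remoting.Messaging;

public class LoggingService : MarshalByRefObject
{
    // Tagged as one-way: the caller does not wait for completion and never
    // receives a return value or an exception from this call.
    [OneWay]
    public void LogMessage(string message)
    {
        // ... record the message on the server ...
    }
}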
using System;

namespace AsyncMailClientSample
{
    public class MailClient
    {
        private MailServer svr;

        public static void Main()
        {
            // ... (the remainder of the listing is omitted here)
        }
    }
}
Threading in .NET
The .NET environment provides several ways for programmers to make use
of threads. Some of the classes in the .NET Framework automatically create
and use threads to implement their operations and by using these features
the programmer will implicitly make use of threading. All of the
asynchronous invocation features provided by .NET use implicit threading
and although the programmer does not need to do anything to create or
control the threads, special consideration may need to be given to how these
operations are used correctly in the application. The programmer can also
explicitly create and control threads using the classes in the
System.Threading namespace.
The System.Threading namespace provides the programmer with all the
facilities necessary to create and manage threads within an application. The
most important types in this namespace are the Thread type and the
ThreadStart type. ThreadStart is a delegate type which is used to define the
method which will be used as the body4 of the thread. The Thread type
contains methods and properties which allow threads to be created,
destroyed and controlled. For our purposes the most important of these are
• Thread(ThreadStart t)
Constructs a new thread using the code specified by the given
ThreadStart delegate as the body of the thread.
• void Start()
Causes the operating system to change the state of the thread to
ThreadState.Running. Once a thread is in the Running state, the
operating system can schedule it for execution. Once the thread
terminates, it cannot be restarted with another call to Start().
• void Abort()
Raises a ThreadAbortException, which begins the process of
terminating the thread. Not supported by .NET Compact Framework
• static void Sleep(int millis)
Blocks the current thread for at least5 the specified number of
milliseconds. The thread does not use any CPU cycles whilst it is
blocked.
• void Suspend()
Prevents the thread from running for an indefinite amount of time. The
thread will start running again only after a call to Resume() is made.
Not supported by the .NET Compact Framework.
• void Resume()
Returns a suspended thread back to a running state. Not supported by
the .NET Compact Framework.
• ThreadPriority Priority
Gets or sets the scheduling priority of the thread, which affects the
relative order in which threads are chosen for execution by the
operating system scheduler.
• string Name
Gets or sets a textual name for a thread. This is useful when debugging
as the name of the thread appears in the debugger’s Threads window.
• static Thread CurrentThread
Gets a reference to the Thread object representing the currently
executing thread.
4 The body of a thread is the sequence of statements that it will execute.
5 The Sleep() method does not guarantee that the thread will stop executing for exactly
the number of milliseconds specified, merely that it will be returned to the Running state
after this time. There may be another delay before the scheduler chooses to execute the
thread again.
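The following short sketch (the class, method and thread names are assumptions) pulls several of these members together:

using System;
using System.Threading;

class ThreadMembersSketch
{
    // The method used as the body of the worker thread
    static void PrintHello()
    {
        for (int i = 0; i < 5; i++)
        {
            Console.WriteLine(Thread.CurrentThread.Name + ": Hello");
            Thread.Sleep(500);   // block this thread for at least 500 ms
        }
    }

    static void Main()
    {
        // The ThreadStart delegate identifies the method that forms the thread body
        Thread t = new Thread(new ThreadStart(PrintHello));
        t.Name = "Worker";
        t.Start();               // move the thread into the Running state

        Console.WriteLine("Main thread continues while Worker runs...");
    }
}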
Using Visual Studio, create a C# console application. Enter the code from
Figure 2 into this project. Build and run the project. Does it behave as you
expect? You should see the phrase Hello World appearing on screen before
the program terminates. It may be easier to observe this if you run the
compiled application from the command line rather than from Visual Studio.
using System;
using System.Threading;

namespace SimpleThreadSample1
{
    class ThreadSample
    {
        // ... (the remainder of the listing is omitted here)
    }
}
Thread Priorities
One method of ordering the operation of threads is to prioritise the threads
and make some more important than others. The more important, or higher
priority threads, will run in preference to the less important, lower priority
threads.
The priority of a .NET thread can be set using the Thread.Priority
property and must be set to one of the constants defined by the
ThreadPriority enumeration.
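For example (a sketch; the thread variables and the Work() method are assumptions), priorities can be assigned before the threads are started:

using System.Threading;

class PrioritySketch
{
    static void Work()
    {
        // thread body
    }

    static void Main()
    {
        Thread hello = new Thread(new ThreadStart(Work));
        Thread again = new Thread(new ThreadStart(Work));

        // ThreadPriority defines Highest, AboveNormal, Normal (the default),
        // BelowNormal and Lowest
        hello.Priority = ThreadPriority.AboveNormal;
        again.Priority = ThreadPriority.BelowNormal;

        hello.Start();
        again.Start();
    }
}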
Modify your solution to the previous exercise so that the threads do not run
in an infinite loop but only cycle round each loop 100 times. Use the
Thread.Priority property to set the priority of the “Hello” thread to
AboveNormal and the priority of the “Again” thread to BelowNormal. The
priority of the “World” thread will remain at its default Normal priority.
Compile and run your code. What order do you expect the words to appear in
now? As the thread which prints “Hello” has the highest priority it should
print first. “World” should be printed next as its thread has the next highest
priority and “Again” should appear last. Is this what happens? If not, can you
explain why not? (HINT: The .NET console class is designed so that it can only
be accessed by one thread at a time. Consider this question again when you
have attempted the exercises on locking later in this tutorial)
Locks
Any thread wishing to execute a section of code that has been marked as
critical must attempt to acquire a lock before entering the critical section. If
several threads attempt to acquire a lock simultaneously only one will
succeed and enter the critical section. The remaining threads will be blocked
and will remain blocked until the successful thread releases the lock as it
leaves the critical section. At this point the blocked threads will be
reactivated and will compete again to acquire the lock.
The C# lock construct is used to protect a block of code. Any thread
attempting to enter this block of code must first attempt to acquire the lock
on the object specified by the lock statement. Once successful, the thread
can enter the associated code block, at which point it will be executing this
block of code under mutual exclusion. The code block protected by the
lock statement is thus a critical section. Figure 3 provides an outline
example that illustrates the use of the lock statement. Typically, the lock
statement uses the this pointer to specify that the calling thread must
obtain the lock on the current object, although any object can be used.
Sometimes it may be necessary to use nested lock statements if mutually
exclusive access to several objects is required to complete an operation.
public void SomeMethod()
{
    lock (this)
    {
        // Critical section – any thread executing the code in
        // this block will do so under mutual exclusion
    }
}
Figure 3: Using the lock construct to protect a critical section
We shall use the scenario of reading and updating the balance in a bank
account for the next few examples in this tutorial. Updating a bank balance
is an operation that must be performed under mutual exclusion if the current
balance of the account is to remain correct. Figure 4 provides a simple
Account class which has an Update() method that allows the account
balance to be modified.
        // ... (most of the Account class listing is omitted here)
        Thread.Sleep(2000);
    }
}
Figure 4: The Account class
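Only the tail of Figure 4 is reproduced above. A sketch of an Account class along the same lines (the field name, the Balance property and the use of lock are assumptions rather than the tutorial's exact listing) might look like this:

using System.Threading;

public class Account
{
    private int balance;

    public int Balance
    {
        get
        {
            lock (this)
            {
                return balance;
            }
        }
    }

    // Update the balance under mutual exclusion; the Sleep call mimics the
    // deliberately slow update seen in the fragment above.
    public void Update(int amount)
    {
        lock (this)
        {
            int newBalance = balance + amount;
            Thread.Sleep(2000);
            balance = newBalance;
        }
    }
}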
public void ReaderTask()
{
    string msg = "Thread " + Thread.CurrentThread.Name + ": Balance = ";
    msg += acc.Balance;
    Console.WriteLine(msg);
}

public void WriterTask()
{
    acc.Update(10);
    Thread.Sleep(5000);
}

// ... (in Main(): the code creating threads tReaderA, tReaderB and tWriter
// from these task methods is omitted here)
tReaderA.Start();
tReaderB.Start();
tWriter.Start();
}
}
Figure 5: The AccountSample class
Using Visual Studio, create a new C# console application. Add the Account
class from Figure 4 to this project. Add the AccountSample class from Figure
5 to the project; this class contains two methods which are used as the bodies
of three threads created in the Main() method. Two of these threads
execute an infinite loop which simply displays the account balance whilst the
third thread executes a loop which continually updates the account balance.
Build and execute the application and observe the output. Does the account
balance increment correctly? Note how the Thread.Name property is used in
the ReaderTask() method that forms the display thread bodies to
customize the message displayed by each thread, allowing us to use one
method to implement the bodies of several threads.
Too much mutual exclusion can be detrimental. One of the reasons for using
multiple threads is to maximize performance by allowing multiple
operations to proceed at the same time. Using mutual exclusion forces
operations to occur one after another, thus reducing the potential
concurrency. Often all that we really need is to ensure that operations which
update or write values are performed under mutual exclusion whilst
operations that simply read values may be performed simultaneously. This
cannot be achieved using the C# lock construct, but the ReaderWriterLock
class provided by .NET can. Figure 6 shows an example of the usage of the
ReaderWriterLock class to create and protect critical sections. A
ReaderWriterLock uses two distinct locks; one for reading and one for
writing. A thread which wants to read a value must first acquire a reader
lock. Multiple reader locks can be granted simultaneously, allowing
concurrent read operations to be performed. A thread wanting to write a
value must first acquire a writer lock. The writer lock will only be granted
when there are no outstanding read requests; when it is granted, the calling
thread is guaranteed mutually exclusive access. No reader locks will be
granted until the writing thread releases the writer lock. As with the lock
construct, any thread which cannot acquire the lock it requires is blocked
until it can successfully acquire the lock.
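Figure 6 is not reproduced here, but a rough sketch of the pattern it describes (the class, field and method names are assumptions) is:

using System.Threading;

public class SharedCounter
{
    private int value;
    private ReaderWriterLock rwLock = new ReaderWriterLock();

    public int Read()
    {
        // Many threads may hold the reader lock at the same time
        rwLock.AcquireReaderLock(Timeout.Infinite);
        try
        {
            return value;
        }
        finally
        {
            rwLock.ReleaseReaderLock();
        }
    }

    public void Increment()
    {
        // The writer lock is exclusive: it is granted only when no reader
        // locks are outstanding
        rwLock.AcquireWriterLock(Timeout.Infinite);
        try
        {
            value++;
        }
        finally
        {
            rwLock.ReleaseWriterLock();
        }
    }
}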
Monitors
A monitor is an alternative, more advanced concurrency control construct
provided by .NET. Whereas locks simply provide mutual exclusion, a
monitor can provide mutual exclusion and condition synchronization, thus
enabling us to impose a relative ordering on the activities of multiple
threads.
Consider the example of a buffer of a fixed capacity and two threads, a
Producer thread which inserts items in the buffer and a Consumer thread
which removes items from the buffer. The Producer must not attempt to
insert items into the buffer if it is full; the Consumer must not attempt to
remove items from the buffer if it is empty. We can use condition
synchronization to ensure that both of these criteria are fulfilled. The
condition synchronization also imposes a relative ordering on the activities
of the Producer and Consumer threads; we know that the Consumer can only retrieve a
value from the buffer after the Producer thread has inserted a value into it.
.NET Monitors are implemented by the Monitor class. The Monitor.Enter()
method is used to enter the monitor; Monitor.Exit() is used to leave the
monitor. Condition synchronization is provided by Monitor.Wait(), which
releases the monitor and blocks the calling thread, and Monitor.Pulse(),
which wakes a thread that is waiting on the monitor.
Now modify the Balance property so that it will only return a value when the
account balance is greater than zero; if the account balance is less than or
equal to zero the call to Balance should block until the account balance
exceeds zero. This can be achieved by using condition synchronization. The
critical section of the Balance property should be modified so that if the
account balance is less than or equal to zero, a call to Wait() is executed on the Account
object. The critical section of the Update() method should be modified so
that a call to Pulse() on the Account object is executed if the balance
after the update exceeds zero. Build and execute your code. Satisfy yourself
that the balance is being updated correctly and that the display threads only
display balance information when the balance is greater than zero.
class BoundedBuffer
{
    // Dummy object for signalling space available
    private object SpaceAvailable = new object();
    // Dummy object for signalling items available
    private object ItemAvailable = new object();

    // ... (fields, constructor and the start of the insertion method are
    // omitted here)
        buffer[writePos] = obj;
        writePos = (writePos + 1) % capacity;
        itemsInBuffer++;

    // ... (the start of the retrieval method is omitted here)
        Monitor.Enter(this);
        if (itemsInBuffer == 0)
        {
            // Buffer empty, wait for items
            Monitor.Wait(ItemAvailable);
        }
        val = buffer[readPos];
        readPos = (readPos + 1) % capacity;
        itemsInBuffer--;
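Since only a fragment of the buffer code appears above, the following self-contained sketch shows a bounded buffer protected by a single monitor; it uses a simpler scheme than the two signalling objects in the fragment, and all of the names are assumptions:

using System.Threading;

class SimpleBoundedBuffer
{
    private readonly object[] buffer;
    private int readPos, writePos, itemsInBuffer;

    public SimpleBoundedBuffer(int capacity)
    {
        buffer = new object[capacity];
    }

    public void Put(object obj)
    {
        Monitor.Enter(this);
        try
        {
            while (itemsInBuffer == buffer.Length)
            {
                // Buffer full: release the monitor and wait for a consumer
                Monitor.Wait(this);
            }
            buffer[writePos] = obj;
            writePos = (writePos + 1) % buffer.Length;
            itemsInBuffer++;
            // Wake a thread that may be waiting in Get()
            Monitor.Pulse(this);
        }
        finally
        {
            Monitor.Exit(this);
        }
    }

    public object Get()
    {
        Monitor.Enter(this);
        try
        {
            while (itemsInBuffer == 0)
            {
                // Buffer empty: release the monitor and wait for a producer
                Monitor.Wait(this);
            }
            object val = buffer[readPos];
            readPos = (readPos + 1) % buffer.Length;
            itemsInBuffer--;
            // Wake a thread that may be waiting in Put()
            Monitor.Pulse(this);
            return val;
        }
        finally
        {
            Monitor.Exit(this);
        }
    }
}

Calling Wait() inside the monitor releases the lock so that the other thread can enter and call Pulse(); using while rather than if guards against a thread waking up when its condition no longer holds.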
The user would like to be able to stop the message scrolling at any point or
change the text displayed by the label.
• Modify the event handler for the Start menu item, so that it creates
and starts a new thread which uses the ScrollMessage() method for
its thread body. Once the thread has been started the Start menu
item should be disabled and the Stop menu item should be enabled.
• Add an event handler for the Stop menu item, so that it stops the
thread which is modifying the position of the lblMessage label. Note
that the Compact Framework does not support the Thread.Abort()
method so you must find some other method of stopping the thread.
HINT: A thread will terminate when it reaches the end of the method
that defines its body. Try using a Boolean flag in the loop in the
ScrollMessage method to control whether the loop should continue or
not; a sketch of this approach is shown below.
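A sketch of the Boolean-flag approach (the field and handler names are assumptions, and the label update is only indicated by a comment):

using System;
using System.Threading;

// Sketch of the relevant parts of the form class (names assumed)
public class ScrollerFormSketch
{
    private Thread scrollThread;
    private volatile bool keepScrolling;

    // Start menu item handler: create and start the scrolling thread
    private void mnuStart_Click(object sender, EventArgs e)
    {
        keepScrolling = true;
        scrollThread = new Thread(new ThreadStart(ScrollMessage));
        scrollThread.Start();
        // disable the Start menu item and enable the Stop menu item here
    }

    // Stop menu item handler: the thread ends when ScrollMessage() returns
    private void mnuStop_Click(object sender, EventArgs e)
    {
        keepScrolling = false;
    }

    private void ScrollMessage()
    {
        while (keepScrolling)
        {
            // ... update the position of lblMessage here ...
            Thread.Sleep(100);
        }
    }
}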
The code you have added should enable the user to stop the label scrolling at
any point. Build and execute your code. Does it execute correctly?
You should find that your application runs as expected but it is not correct
as it violates the guidelines for the use of Windows Forms controls in the
presence of multiple threads. The thread which updates the position of
lblMessage is a different thread to that which created lblMessage and should
not directly operate on the label. To overcome this issue correctly and safely
we must use the Invoke() method of lblMessage, passing it a delegate to a
method which will update the position of the label for us and which will
execute in the context of the application thread which initially created the
label. In the .NET Framework this delegate can be of any delegate type
whose target method takes no parameters, a MethodInvoker delegate instance or an
EventHandler delegate instance. In the .NET Compact Framework this
delegate must be an EventHandler delegate instance and the method to
which it refers must have a signature that matches that of the EventHandler
type.
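A minimal sketch of this approach (the form, method names and position change are assumptions; the EventHandler form shown is the one accepted by the Compact Framework):

using System;
using System.Windows.Forms;

// Sketch: a form whose label can be moved safely from a worker thread
public class ScrollForm : Form
{
    private Label lblMessage = new Label();

    public ScrollForm()
    {
        lblMessage.Text = "Hello World";
        Controls.Add(lblMessage);
    }

    // Called from the scrolling thread: marshal the update onto the thread
    // that created lblMessage.
    public void MoveLabel()
    {
        lblMessage.Invoke(new EventHandler(UpdateLabelPosition));
    }

    // Executes on the UI thread, so it is safe to touch the control here.
    private void UpdateLabelPosition(object sender, EventArgs e)
    {
        lblMessage.Left = lblMessage.Left - 5;
    }
}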
Now add a Click event handler for the Change Message menu item. This
must change the message displayed by the scrolling label as soon as this
menu item is selected. Ideally the message should change to the next
message in a set of predefined messages each time the Change Message
menu item is selected. Because the Change Message event handler executes
on the application thread it can simply change the Text property of
lblMessage to the correct text for the new message.
An alternative approach, which is extremely useful when working with
threaded user interfaces, is to make use of the powerful data binding
facilities offered by .NET. Suppose we wish to change the behaviour of the
application so that the message displayed by the scrolling label
automatically changes every 2 seconds. In the Start menu item Click event
handler we could create another thread which simply sleeps for 2 seconds
then changes the message. This thread cannot directly alter the lblMessage.Text
property as this violates the threading guidelines. Instead of using the
lblMessage.Invoke() method as before, we can use data binding, as shown
in the following exercise.
Now add the class shown in Figure 9 to your project. Add a member variable of
this type called msgData to the form class. This class simply stores a set of
messages as text strings and has a Text property which returns a message
string. The NextMessage() method causes subsequent calls to the Text
property to return the next message in the set of stored messages. The code
added to the constructor simply creates an instance of this class and binds its
Text property to the Text property of the lblMessage label.
In the Click event handler of the Start menu item create a new thread which
simply iterates round a loop controlled by a Boolean flag. Inside the
loop Thread.Sleep should be used to delay for 2 seconds before calling
msgData.NextMessage() to cause the message to change to the next in the
set of messages.
Modify the Click event handler for the Change Message menu item so that it
calls msgData.NextMessage() to change the message.
Build and execute your code and test that it behaves as you expect.
If you examine the code of the MessageData class you will see that it
defines an event of type EventHandler called TextChanged. This is required by
the Windows Forms data binding mechanism. When a property of an object
is bound to a control, the data binding mechanism looks for an event
with the same name as the property plus Changed (in our
case, TextChanged). If such an event exists then the data binding
mechanism subscribes to it; this causes the control to
raised. The private OnTextChanged() method is used to raise the event
when the message changes. Note also that the NextMessage() method uses
locking to prevent the ChangeMessage thread and the user from trying to
call NextMessage() simultaneously.
public MessageData()
{
_msgData.Add("Hello World");
_msgData.Add("Bye World");
_msgData.Add("Buy Fish");
}
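Only the constructor of Figure 9 appears above. A sketch of the rest of the MessageData class, consistent with the description given earlier (the field names and method bodies are assumptions), might be:

using System;
using System.Collections;

public class MessageData
{
    private ArrayList _msgData = new ArrayList();
    private int current;

    // The data binding mechanism looks for an event named <Property>Changed
    // and subscribes to it
    public event EventHandler TextChanged;

    public MessageData()
    {
        _msgData.Add("Hello World");
        _msgData.Add("Bye World");
        _msgData.Add("Buy Fish");
    }

    public string Text
    {
        get { return (string)_msgData[current]; }
    }

    // Advance to the next message and notify any bound controls
    public void NextMessage()
    {
        lock (this)
        {
            current = (current + 1) % _msgData.Count;
            OnTextChanged();
        }
    }

    private void OnTextChanged()
    {
        if (TextChanged != null)
        {
            TextChanged(this, EventArgs.Empty);
        }
    }
}

The binding mentioned in the exercise can then be set up in the form's constructor with a call along the lines of lblMessage.DataBindings.Add("Text", msgData, "Text").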
blocks but the rest of the program can carry on performing useful
work. .NET provides asynchronous (nonblocking) I/O operations to
the programmer and internally these are implemented using separate
threads. If you use an asynchronous operation in a .NET program
then you are implicitly making use of threads.
• Independent Tasks
Sometimes a program is required to simultaneously perform
independent tasks. For example, a Web server may simultaneously
service independent requests from several clients, or a computer
control system may need to manipulate several control parameters at
the same time. Although a single-threaded program could be written
to implement such systems it is often easier to consider the
concurrency in the problem and use a separate thread for each
independent task.
Concurrency Problems
The use of threading is not without its problems and it is normally harder to
write and debug multithreaded programs. Consider the pseudo-code
example in Figure 10. When this program is executed both Thread A and
Thread B will enter an infinite loop. Thread A waits for FlagB to become
true, which never happens, whilst Thread B similarly waits for FlagA to
become true. The problem occurs because each thread waits on variables
manipulated by the other thread. This sharing of variables creates inter-
dependence between the threads which results in both threads entering a
state from which neither can proceed. This situation is known as livelock
and it is one of the problems that can occur in an application which uses
concurrency. Although this is a trivial example, it is all too easy to achieve
livelock in a program that uses many threads.
In livelock the threads cannot make progress but are still scheduled and
perform some execution on the CPU. Deadlock is a similar condition in
which threads cannot make progress and are blocked, thus preventing them
from executing on the CPU. Deadlock often occurs when multiple locks are
required before an action can be performed. If more than one critical section
requires this same set of locks, but the set of locks are acquired in a different
order then deadlock can result. This is illustrated in Figure 11. Deadlock can
be avoided by always acquiring and releasing locks in the same order and
by holding a lock only for the minimum amount of time required.
FlagA = false;
FlagB = false;
Thread A
{
while(FlagB == false) {
// do nothing
}
// Do something useful
}
Thread B
{
while(FlagA == false) {
// do nothing
}
// Do something useful
}
Figure 10: Livelock in a multithreaded application
object A;
object B;
Thread A
{
lock(A){
lock(B) {
// do something
}
}
}
Thread B
{
lock(B){
lock(A) {
// do something
}
}
}
Figure 11: Deadlock in a multithreaded application
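The deadlock in Figure 11 disappears if both threads acquire the locks in the same order; a runnable sketch of the corrected pattern (the class and method names are assumptions) is:

using System.Threading;

class LockOrderingSketch
{
    private static object A = new object();
    private static object B = new object();

    // Both thread bodies acquire the locks in the same order (A then B), so
    // neither can hold one lock while waiting for the other in reverse order.
    static void ThreadABody()
    {
        lock (A)
        {
            lock (B)
            {
                // do something
            }
        }
    }

    static void ThreadBBody()
    {
        lock (A)
        {
            lock (B)
            {
                // do something
            }
        }
    }

    static void Main()
    {
        new Thread(new ThreadStart(ThreadABody)).Start();
        new Thread(new ThreadStart(ThreadBBody)).Start();
    }
}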