Being threadsafe - an introduction to the pitfalls of parallelism
Written by Mike James   
Tuesday, 06 July 2010

If you want to make your programs go faster in the future you have to take up the challenge of threading. In this article we look at the basics of writing threadsafe code and why things go wrong.

 

You are probably coming under pressure to make your applications more responsive and to make them multi-core aware.

The solution is, of course, very simple – just use threads.

The downside is that with the solution comes a problem. Most of the code you will encounter isn’t “threadsafe” and this includes most of the objects in the .NET class library and of course your own code.

We often use the term threadsafe without really bothering to define what it actually means – as if using it often enough would make its meaning obvious.

Basically, a block of code, or an object, is threadsafe if it works correctly and as desired when multiple threads make use of it at the same time. You can even define two grades of threadsafeness according to whether you are considering real concurrent execution or the one-thread-at-a-time access of multitasking. In most cases the distinction makes little difference. However, it is worth noting that today's multi-core and multi-processor machines make it increasingly likely that your code will be executing simultaneously on more than one core, i.e. true parallelism.

Now the important thing to realise is that most code isn’t threadsafe unless you take steps to make it so. Most code isn't threadsafe by its very nature.

What exactly is the problem?

There is the obvious confusion caused by threads sharing the same data. For example, if a method uses a counter, another thread running the same method might well zero that counter and leave it in a state different from the one the first thread left it in.

For example, consider the following pair of methods:

public int count;
public void A()
{
    for (int i = 0; i < 9999999; i++)
    {
        count++;
    }
}

public void B()
{
    count = 0;
}

There is nothing strange or ambiguous about what they do, and their outputs are completely predictable from the text of the code. The first method increments count 9,999,999 times, and hence count is 9,999,999 when the method ends. The second method sets count to zero, and hence count is zero when the method ends.

Things are quite different if we now put the two methods into a slightly different context - a multithreaded context:

count = 0;
Thread T1 = new Thread(A);
Thread T2 = new Thread(B);

T1.Start();
T2.Start();

T1.Join();
T2.Join();

textBox1.Text = count.ToString();

where it is assumed that a

using System.Threading;

is in force in this and all subsequent examples.

In this case method A adds one to count in a loop and method B zeros the same variable as before. These are run as two independent threads and the main thread waits until they have completed using the Thread.Join method. 

If you run this program you will discover that the final value of count isn’t predictable, because it all depends on when the two threads get access to count. Sometimes you will see 9,999,999 in the textbox, other times a lower value, depending on where method B zeroed the count within the for loop of method A.

Is this threadsafe coding?

In a sense it might be, if the idea is to keep a count since the last zeroing by method B. There is a sense in which "threadsafe" has to be interpreted in the light of what the code is actually intended to do.

In a deeper sense, however, it isn’t threadsafe, because access to the shared resource, i.e. count, isn’t controlled.

This means that method A could be interrupted in the middle of the act of incrementing count, with the result that an erroneous value is stored back in count when its thread resumes.
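The danger is easier to see if the increment is written out as the separate read-modify-write steps the processor actually performs. A minimal runnable sketch follows; the use of Interlocked.Increment is the standard .NET way of making this particular read-modify-write indivisible, and the class and loop counts are invented for this illustration:

```csharp
using System;
using System.Threading;

class AtomicCounterDemo
{
    static int count = 0;

    static void Main()
    {
        // count++ hides three separate steps - read, add, write back -
        // and a thread switch can occur between any two of them.
        // Interlocked.Increment performs all three as one indivisible
        // operation, so no increment is ever lost.
        ThreadStart work = delegate
        {
            for (int i = 0; i < 1000000; i++)
            {
                Interlocked.Increment(ref count);
            }
        };
        Thread T1 = new Thread(work);
        Thread T2 = new Thread(work);
        T1.Start();
        T2.Start();
        T1.Join();
        T2.Join();
        Console.WriteLine(count); // always 2000000
    }
}
```

If the two threads used a plain count++ instead, the final value would usually be less than 2,000,000 because some increments would be overwritten, which is exactly the interruption described above.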

In this case the incrementation is such a fast operation that the chance of being interrupted is low but we can change this by stretching out the operation a little:

public void A()
{
    for (int i = 0; i < 9999999; i++)
    {
        int temp = count + 1;
        if (temp != count + 1)
        {
            return;
        }
        count = temp;
    }
}

In this case a temporary variable is used to store the incremented value, and an if statement checks that, logically, the incremented value is one more than the old count value. There is a sense in which the if statement plays the role of an assertion that temp should equal count plus one if the program is working properly - and just reading the code through makes it inconceivable that it could be otherwise. However, multithreading makes this obvious assertion not so obvious.

If you run this code with a breakpoint on the return instruction then you should find that about one in six runs results in temp and count+1 being different.

This is the sort of problem that threadsafe code is designed to avoid. You should also now be in a state of mind where you not only see that such threading errors are theoretically possible, but regard them as fairly likely!
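One standard remedy is to serialize access to the shared variable with C#'s lock statement: only one thread at a time can hold the lock, so the read-modify-write inside it can never be interrupted by the other thread. A minimal sketch, assuming a private countLock object invented here purely as a lock token:

```csharp
using System;
using System.Threading;

class LockedCounter
{
    static int count = 0;
    static readonly object countLock = new object();

    static void A()
    {
        for (int i = 0; i < 9999999; i++)
        {
            lock (countLock) // only one thread at a time gets past here
            {
                count++;     // the read-modify-write is now indivisible
            }
        }
    }

    static void B()
    {
        lock (countLock)
        {
            count = 0;
        }
    }

    static void Main()
    {
        Thread T1 = new Thread(A);
        Thread T2 = new Thread(B);
        T1.Start();
        T2.Start();
        T1.Join();
        T2.Join();
        // B may still run before, during or after A's loop, so the final
        // value is not fixed - but no individual increment is ever
        // corrupted, and the result always lies between 0 and 9999999.
        Console.WriteLine(count);
    }
}
```

Notice that locking makes each individual operation safe without making the overall outcome deterministic - that still depends on what the code is actually intended to do, as discussed above.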

Notice that, as this sort of threading error can have a low probability of occurring, it’s possible for it to go unnoticed for many, many runs of the program and look, to all intents and purposes, like some sort of random hardware failure.

It is this that makes multi-threaded programs very difficult to debug.

