The Perils of the C# Parallel For
Written by Mike James   
Tuesday, 20 January 2015

Making parallel code easier to use is an important development, but making it easier also means you can use it without realising what you are getting into. Unless we can find a way of making parallel code both easy and safe you are still going to have to take care - even with the easy-to-use Parallel For.


I’m not being melodramatic, well a little perhaps, but there really is danger whenever you do anything in parallel, and especially so when it is packaged in a friendly way so that you might not even notice.

The .NET parallel extensions are great. They are easy to use and provide a huge payback – but, and this is a big but, parallel programming isn’t easy. If it was we would be using it all over the place and the parallel extensions would be nothing new.

Shopping Problems - Race Condition

Parallelism is easy if you restrict your attention to completely unrelated tasks. If you have task A which has nothing to do with task B then you can simply say to a processor “get on with task A” and to another processor “get on with task B”.

For example, if you have a shopping list then you can tear it in two and give one to shopper A and one to shopper B and tell them to both do the shopping at the same time. When they meet up at the checkout there is no problem and the shopping is done in half the time.

Things start to go wrong when the tasks are interrelated. For example, if the top half of the shopping list says

“buy bread”

and the bottom half says

“if you bought some bread buy some jam”

then there are problems.

Now when you tear the list into two and send off shoppers A and B, what you get back at the checkout varies - sometimes you get bread and no jam, sometimes bread and jam, and very rarely jam and no bread (if shopper A picks up a loaf, but at the last minute notices it’s stale and puts it back without getting another one).

If the list is processed by a single shopper buying things in the order the list specifies there is no problem. As soon as you split the list, what you get depends on whether or not shopper A has bought some bread at the point shopper B checks and tries to buy jam.

This is called a “race condition” because the outcome depends on the timing of processing that is run independently - it is a “race” because what you end up with depends on who, or rather which processor, gets there first. It is just one of the horrors of parallel programming.

Until recently the average programmer didn’t often meet a race condition because most programs were, by their basic construction, single-threaded. If you wanted to create a multi-threaded program it was quite hard work and you needed to learn how to do it.

This is where Parallel For enters the picture.


The Parallel For

The Parallel For makes it so easy that you can start parallel programming in a few minutes - no threads, no tasks, nothing at all that looks any different from normal programming.

When you get a little deeper you discover that you have to do things like lock shared resources and so on, but in practice problems can arise much earlier, and at a much more basic level, if you haven’t thought about the consequences of implementing something in parallel.
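To give a flavour of what “locking a shared resource” means, here is a minimal sketch - it is not part of the article’s example and it jumps ahead to the Parallel.For introduced below - in which every iteration adds into a single shared total:

double total = 0;
object totalLock = new object();
Parallel.For(0, 1000000, i => {
    double value = Math.Sqrt(i);
    // total += value is a read-modify-write; without the lock
    // two cores can interleave their updates and some are lost
    lock (totalLock) {
        total += value;
    }
});

Leave the lock out and the final total is usually wrong; put it in and every iteration queues up for the same lock, which eats into the benefit of going parallel. Either way, shared state is where the trouble starts.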

All you have to do to get started is download Visual Studio, and you have the .NET parallel extensions ready to use - the extensions are a standard part of all recent .NET distributions.

To see parallel programming in action start a new C# project and add:

using System.Threading;
using System.Threading.Tasks;

System.Threading is the original threading library and Tasks is where the new parallel features live.

Next place a button on a form and write a simple for loop in its click event handler:

int size = 1000000;
double[] data = new double[size];
Stopwatch sw = new Stopwatch();
sw.Start();
for (int i = 0; i < size; i++) {
    data[i] = Math.Pow(
        new Random().NextDouble(), 0.6);
}
MessageBox.Show("done " +
    sw.Elapsed.TotalMilliseconds.ToString());
sw.Reset();

The loop simply does some arithmetic on a lot of random data. The Stopwatch class is used to time how long the loop takes, and to make use of this very useful class all you need to add is:

using System.Diagnostics;

It can record elapsed time accurately, even when the work is spread across multiple threads, and it has been a standard part of .NET since version 2.0.

We can turn this loop into a parallel loop very easily. The Parallel static class has a For method which accepts the start and end values for the loop and a delegate to execute.

In general the command is:

Parallel.For(start, end, delegate);

The loop is run from start to end-1 and must run in the forward direction, that is from smaller to bigger index values.
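If your existing sequential loop counts downwards there is no overload of Parallel.For that will do that for you. Assuming the iterations really are independent, one workaround - a sketch, not part of the article’s example - is to keep the forward range and compute the reversed index inside the body:

// run the forward range but touch the elements in "reverse"
Parallel.For(0, size, i => {
    int j = size - 1 - i;   // reversed index
    data[j] = Math.Pow(
        new Random().NextDouble(), 0.6);
});

Since the iterations are independent it doesn’t matter that, in parallel, they no longer execute in any particular order at all - which is exactly the condition the rest of this article is about.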

There is a range of variations of the Parallel.For command, including parameters to control the parallel iteration and a Parallel.ForEach. However, the problem that we are about to encounter can happen with all of them.
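For example, Parallel.ForEach works over a collection rather than an index range. This is a minimal sketch, not part of the article’s example, that uses a small string array and Debug output:

// each item is handed to whichever thread is free
string[] items = { "bread", "jam", "milk", "tea" };
Parallel.ForEach(items, item => {
    Debug.WriteLine(item + " processed on thread " +
        Thread.CurrentThread.ManagedThreadId);
});

Run it a few times and the order of the output, and the thread ids, change from run to run - the first hint that a parallel loop has no fixed order.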

The simplest way to provide the delegate is to use a lambda expression, and so the following example will execute the code for values of i from 0 to size-1.

sw.Start();
Parallel.For(0, size, i => {
    data[i] = Math.Pow(
        new Random().NextDouble(), 0.6);
});
sw.Stop();
MessageBox.Show("done " +
    sw.Elapsed.TotalMilliseconds.ToString());

This all works and the results are impressive. On a dual core machine the simple loop takes around 6500ms and the parallel loop takes 3500ms which isn’t quite half but a very worthwhile speed up.

If you use the task manager to monitor CPU usage you will see that the simple loop uses 100% of one of the cores but the parallel loop uses 100% of both cores. This in itself might not exactly be a good thing, as your parallel loop is a bit of a processor hog, but these are the sort of simple decisions that you have to make.
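If hogging every core is a problem, the loop can be reined in. This is a sketch, assuming the same loop as above, that uses a ParallelOptions instance to leave one core free for the rest of the system:

var options = new ParallelOptions {
    // use all cores but one (never fewer than one)
    MaxDegreeOfParallelism = Math.Max(1, Environment.ProcessorCount - 1)
};
Parallel.For(0, size, options, i => {
    data[i] = Math.Pow(
        new Random().NextDouble(), 0.6);
});

You trade a little speed for a machine that stays responsive while the loop runs.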

The bottom line is that the Parallel For is very easy to use and gives a real gain in performance. It provides a lot for very little effort.

The Downside

Parallel For is easy and it looks as if it has just converted a loop that takes 6 seconds into one that takes 3 seconds - which is like doubling the machine's speed for free.

However, unless you realise that Parallel For isn’t magic, there could be problems ahead.

You can’t just convert any old for loop to a parallel for and expect everything to work - it only works if the for loop is essentially a set of independent operations. This sounds easy, but it can be difficult to spot because a parallel implementation can render everything you have come to expect simply wrong.

Accessing Earlier Results

To show this in action let’s implement the equivalent of the bread and jam shopping list.

It is a truth drummed into every programmer by experience that if you have a for loop going in the positive direction then it is often perfectly safe to access the results of earlier iterations.

That is, if you have reached index in the loop it is OK to look at the result of index-1, or index-2 or ... – but this isn’t the case in a parallel for.

Change the simple for loop to:

bool flag = true;
for (int i = 0; i < size; i++) {
    data[i] = 1;
    // check a random earlier element - by now it should always be non-zero
    if (data[new Random().Next(i)] == 0.0)
        flag = false;
}
MessageBox.Show(flag.ToString());

The second statement in the loop checks to see if a random earlier element in the array is zero. The method Random().Next(i) generates a random integer in the range 0 up to, but not including, i (and simply returns 0 when i is 0). As the loop progresses from data[0] to data[size-1], by the time you reach data[i] all of the array elements from data[0] up to data[i] have already been set to a non-zero value, i.e. 1.

That is, if you run this loop you are guaranteed to see flag=true.
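To see why that guarantee is about to evaporate, this is what the obvious parallel conversion looks like - a sketch of the next step rather than code to copy, and it assumes data starts out as a freshly allocated, all-zero array:

bool flag = true;
// data must start out all zeros, e.g. data = new double[size];
Parallel.For(0, size, i => {
    data[i] = 1;
    if (data[new Random().Next(i)] == 0.0)
        flag = false;
});
MessageBox.Show(flag.ToString());

Run this a few times and flag will regularly come out false: the random earlier element can belong to a chunk of the index range that another core simply hasn’t reached yet - the bread and jam problem all over again.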
