The Perils of the Parallel For
Thursday, 07 October 2010


The downside

Parallel for is easy and it looks as if it has just converted a loop that takes 6 seconds into one that takes 3 seconds – which is like doubling the machine's speed for free.

However, parallel for isn’t magic, and unless you realise this there could be problems ahead.

You can’t just convert any old for loop to a parallel for and expect everything to work – it only works if the for loop is essentially a set of independent operations. This sounds easy to check, but dependencies can be difficult to spot because a parallel implementation can render everything you have come to expect from a sequential loop simply wrong.

To show this in action let’s implement the equivalent of the bread and jam shopping list.

It is a truth drummed into every programmer by experience that if you have a for loop going in the positive direction then it is often perfectly safe to access the results of earlier iterations. That is, if the loop has reached index it is OK to look at the result of index-1, index-2 and so on – but this isn’t the case in a parallel for.

Change the simple for loop to:

// data is assumed to be a double array of size elements, initially all zero
bool flag = true;
for (int i = 0; i < size; i++)
{
    data[i] = 1;
    // check a randomly chosen earlier element
    if (data[new Random().Next(i)] == 0.0)
        flag = false;
}
MessageBox.Show(flag.ToString());

The second statement in the loop checks to see if a randomly chosen earlier element in the array is zero. The call new Random().Next(i) generates a random integer in the range 0 up to, but not including, i. As the loop progresses from data[0] to data[size-1], by the time you reach data[i] all of the array elements from data[0] up to data[i] have been set to a non-zero value, i.e. 1.

That is, if you run this loop you are guaranteed to see flag=true.

Now consider:

Parallel.For(0, size, i =>
{
    data[i] = 1;
    // the "earlier" element may not have been processed yet
    if (data[new Random().Next(i)] == 0.0)
        flag = false;
});
MessageBox.Show(flag.ToString());

In this case the order in which the elements are processed isn’t guaranteed. In most runs there will be some earlier element that hasn’t yet been processed, and so the result is usually flag=false.

However, notice that the actual situation is much worse than simply getting a different answer. Occasionally the order of evaluation will be such that every tested element has already been set, and in that case the result will be flag=true. The outcome isn’t even predictable – it’s a race condition – and race conditions make programs non-deterministic: the result can vary each time you run the program, as if its outcome were random.

The random nature of a program plagued by a race condition is often put down to an intermittent hardware fault!

Also notice that there is no easy fix for this problem. It isn't a problem due to locking, or the use of a shared resource, or any other fixable problem. Because the algorithm you are trying to implement needs to access the data in a particular order, and a parallel implementation doesn't promise to evaluate in any particular order, the algorithm is inherently not parallel. You might well be able to re-cast the algorithm in a form that is suitable for parallel implementation, but as it stands it just doesn't work.
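
As a rough sketch of what such a re-cast might look like – assuming data and size as in the earlier snippets – notice that the order-dependent question "is some earlier element still zero?" can be replaced by the order-independent question "is any element still zero once every element has been set?". That version splits into two passes, each of which really is a set of independent operations:

bool flag = true;

// Pass 1: set every element - each iteration is completely independent
Parallel.For(0, size, i =>
{
    data[i] = 1;
});

// Pass 2: check every element - the body only reads data, so order is irrelevant
Parallel.For(0, size, i =>
{
    if (data[i] == 0.0)
        flag = false;  // several threads may write false, but they all write the same value
});

MessageBox.Show(flag.ToString());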

This order of evaluation problem is, of course, the reason why you only get a forward, i.e. 1, 2, 3, 4..., Parallel.For and not a reverse, i.e. ...4, 3, 2, 1, form. As the order of evaluation isn't fixed it would be nonsense to let you specify it as being in reverse order. Any algorithm that requires a loop with a particular ordering isn't inherently parallel.

The only type of Parallel.For that is free of this problem is one that doesn't manipulate the index, i.e. every expression in the loop body is indexed by i and not, say, i+1 or i-3.
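
For example, a purely element-wise loop of the kind just described has no ordering dependency at all and can be handed to Parallel.For unchanged – a minimal sketch, with a, b and result as illustrative arrays of length size:

double[] a = new double[size];
double[] b = new double[size];
double[] result = new double[size];

// ... fill a and b ...

Parallel.For(0, size, i =>
{
    // each iteration reads and writes only index i,
    // so no iteration depends on the result of any other
    result[i] = a[i] * b[i];
});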

Notice that this doesn’t mean that the parallel for is useless or even very primitive. Parallel.For is doing a great deal of work on your behalf. It is choosing how to partition the loop, creating the threads necessary, running the partitions one per thread and making sure that everything waits until the final thread completes the task. If you were to try to implement it from scratch you would need to generate a lot of code.
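
To get a feel for how much this saves you, here is a rough sketch of what a hand-rolled equivalent of Parallel.For(0, size, i => data[i] = 1) might look like using raw threads – fixed equal partitions only, with no load balancing and no exception handling, both of which the real Parallel.For also takes care of:

int threadCount = Environment.ProcessorCount;
int chunk = size / threadCount;
Thread[] threads = new Thread[threadCount];

for (int t = 0; t < threadCount; t++)
{
    int start = t * chunk;
    // the last thread picks up any remainder left by the integer division
    int end = (t == threadCount - 1) ? size : start + chunk;
    threads[t] = new Thread(() =>
    {
        for (int i = start; i < end; i++)
            data[i] = 1;   // the loop body, run over this partition only
    });
    threads[t].Start();
}

// make sure everything waits until the final thread completes
foreach (Thread thread in threads)
    thread.Join();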

This is a great simplification over the DIY version and Parallel.For is well worth using. But at the end of the day you are still using separate threads to run different portions of the for loop, and so all of the usual threading problems arise and it is up to you to keep them under control.

A simple-minded approach is to restrict the use of Parallel.For to tasks that are truly isolated from one another and never do anything adventurous – but that would simply be missing an opportunity. There are algorithms that can be implemented using a Parallel.For, but they need some attention to resource sharing in just the same way that a raw threads solution would.
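
As a hedged illustration of the kind of attention that is needed – the variable names here are my own – consider summing the data array. Writing total += data[i] inside the lambda would be exactly the sort of shared-resource race a raw threads solution suffers from; the overload of Parallel.For that gives each thread its own subtotal, and only touches the shared total under a lock once per thread, is one standard way to deal with it:

double total = 0;
object totalLock = new object();

Parallel.For(0, size,
    () => 0.0,                      // each thread starts with its own subtotal
    (i, loopState, subtotal) =>
        subtotal + data[i],         // no shared state is touched in the loop body
    subtotal =>
    {
        lock (totalLock)            // the shared total is updated just once per thread
        {
            total += subtotal;
        }
    });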

More adventurous parallel programming soon.

