The Perils of the C# Parallel For
Written by Mike James   
Tuesday, 20 January 2015



The same algorithm used with a parallel for is much trickier.
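For reference, the sequential version of the algorithm - a sketch, assuming data is a double array of length size and flag starts out true, as in the full listing - looks like this:

```csharp
// Sequential: by the time the test runs, every data[j] with j < i
// has already been set to 1, so flag stays true.
var rnd = new Random();
for (int i = 0; i < size; i++)
{
    data[i] = 1;
    if (data[rnd.Next(i)] == 0.0)  // Next(0) returns 0, so data[0] tests itself
        flag = false;
}
```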

Now consider:

Parallel.For(0, size, i => {
   data[i] = 1;
   if (data[new Random().Next(i)] == 0.0)
       flag = false;
});

In this case the order in which the elements are processed isn’t guaranteed. In most runs there will be some earlier element that hasn’t yet been processed when the test is made, so the result is usually flag=false.

However, notice that the actual situation is much worse than simply getting a different answer. Very occasionally the order of evaluation will be such that all of the tested elements are non-zero, and in that case the result will be flag=true. The outcome isn’t even predictable - it’s a race condition - and race conditions make programs non-deterministic. The result of the program can vary each time you run it, as if its outcome were random.

The random nature of a program plagued by a race condition is often put down to an intermittent hardware fault!

No Fix!

Also notice that there is no easy fix for this problem.

It isn't a problem caused by locking, or the use of a shared resource, or any other fixable problem. The algorithm you are trying to implement needs to access the data in a particular order, and a parallel implementation doesn't promise to evaluate in any particular order, so the algorithm is inherently not parallel.

You might well be able to re-cast the algorithm in a form that is suitable for parallel implementation, but as it stands it just doesn't work.

This order-of-evaluation problem is, of course, the reason why you only get a forward, i.e. 1, 2, 3, 4..., Parallel.For and not a reverse loop, i.e. ...4, 3, 2, 1, form. As the order of evaluation isn't fixed, it would be nonsense to let you specify it as reverse order. Any algorithm that requires a loop with a particular order isn't inherently parallel.

The only type of Parallel.For that is free of this problem is one that doesn't manipulate the index, i.e. all expressions are indexed by i and not, say, i+1 or i-3.
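For example, a loop in which every expression uses only the index i is safe - a sketch:

```csharp
// Each iteration touches only data[i]; no iteration depends on
// another having run first, so the order of evaluation is irrelevant.
int size = 1000;
var data = new double[size];
Parallel.For(0, size, i =>
{
    data[i] = Math.Sqrt(i);
});
```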

Notice that this doesn’t mean that the parallel for is useless or even very primitive.

Parallel.For is doing a great deal of work on your behalf. It chooses how to partition the loop, creates the threads necessary, runs the partitions one per thread and makes sure that everything waits until the final thread completes its task. If you were to try to implement it from scratch you would need to write a lot of code.

This is a great simplification over the DIY version and Parallel.For is well worth using. But at the end of the day you are still using separate threads to run different portions of the for loop, so all of the usual problems arise and it is up to you to keep them under control.
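To see roughly what you are being saved, a bare-bones DIY version might look something like this - a sketch only, with a much cruder partitioning scheme than the real Parallel.For uses:

```csharp
// Split [0, size) into one chunk per worker, run each chunk on its
// own task and block until they have all finished.
int size = 1000;
var data = new double[size];
int workers = Environment.ProcessorCount;
int chunk = (size + workers - 1) / workers;
var tasks = new Task[workers];
for (int w = 0; w < workers; w++)
{
    int start = w * chunk;
    int end = Math.Min(start + chunk, size);
    tasks[w] = Task.Run(() =>
    {
        for (int i = start; i < end; i++)
            data[i] = Math.Sqrt(i);
    });
}
Task.WaitAll(tasks);
```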

A simple-minded approach is to restrict the use of Parallel.For to tasks that are truly isolated from one another and never do anything adventurous - but that would simply be missing an opportunity. There are algorithms that can be implemented using a Parallel.For, but they need some attention to resource sharing in the same way that a raw-threads solution would.
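For example, a parallel sum shares a single accumulator between all the iterations and needs the same protection a raw-threads solution would. Using Interlocked is one reasonable choice here - a lock statement would also do:

```csharp
// total is shared by every iteration; a plain total += i would be a
// lost-update race. Interlocked.Add makes the update atomic.
const int size = 1000;
long total = 0;
Parallel.For(0, size, i =>
{
    Interlocked.Add(ref total, i);
});
// total is now 499500, i.e. size * (size - 1) / 2
```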

More adventurous parallel programming soon. 



Related Articles

Plumbing - parallel programming for everyone     

Lambda Expressions 














