Deep C# - Threading, Tasks and Locking
Written by Mike James   
Wednesday, 22 October 2025

The lock using the current instance works equally well, but imagine what would happen if the monitor was protecting a resource shared by multiple instances of the class. The result would be messy, to say the least, as each instance would obey its own lock, but threads working through different instances would still access the shared resource at the same time. Similarly, a lock on this isn’t very useful for controlling access from different objects. Another problem with locking on the current instance is that you might well forget that it is being used to protect a particular resource and accidentally use it to protect another, completely unconnected, resource. This would result in a thread accessing resource one unnecessarily blocking all access to resource two.

In many ways it is better to create an object specifically to be used to lock a particular resource and to include the name of the resource in the name of the lock object. Don’t make the common mistake of using a string or a value type as the lock, because there are pitfalls in both. The string might well end up being shared with other, identical strings due to interning, and the value type would be boxed to a new object each time it was used, nullifying the effect of the lock.
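As a rough sketch of the idea, with countLock and count as purely illustrative names, a counter shared by every instance of a class might be protected like this:

public class SharedCounter
{
    // One lock object for the whole class, named after the resource it guards
    private static readonly object countLock = new object();
    private static int count;

    public void Increment()
    {
        lock (countLock)  // every instance and every thread takes the same lock
        {
            count++;
        }
    }
}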

Another potential problem with using Monitor in the way described is that if the code crashes while it has a lock then the lock never gets released. Similarly, you could accidentally forget to release the lock or attempt to release the lock on the wrong object. You can avoid the problem of a crash by wrapping the code in a try-finally statement, but it’s much easier to use the equivalent lock statement. That is:

lock(object){list of instructions}

is equivalent to: 

try
{
    Monitor.Enter(object);
    list of instructions
}
finally
{
    Monitor.Exit(object);
}
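Strictly speaking, the compiler in recent versions of C# expands lock into a slightly more careful version of this pattern, using the Monitor.Enter overload that reports whether the lock was actually taken. Roughly:

bool lockTaken = false;
try
{
    Monitor.Enter(object, ref lockTaken);
    list of instructions
}
finally
{
    if (lockTaken) Monitor.Exit(object);
}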

In other words, lock will try to obtain a lock using the specified object before executing the list of instructions within a try. No matter what happens, you can be sure that the lock will be released so that other threads can use the resource. For example, the previous code can be written in a more robust way as:

public void A()
{
    for (int i = 0; i < 10000000; i++)
    {
        lock (this)
        {
            count++;
        }
    }
}

 

Notice that while this is rather more foolproof than using the basic Monitor methods, a thread that doesn’t play by the rules and simply accesses the resource will spoil everything. The point is that you can’t enforce locking, just hope that everyone remembers to use it.

There are other Monitor methods that are sometimes useful. For example, the TryEnter method will attempt to acquire a lock, waiting for at most a specified time, but will allow the thread to continue if the lock cannot be acquired. Clearly, in this case you need to test the return value, a Boolean, to see if the lock has been acquired and do something different if it hasn’t.
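For example, reusing the illustrative countLock object from the earlier sketch, the pattern might look something like:

if (Monitor.TryEnter(countLock, 100))  // wait at most 100ms for the lock
{
    try
    {
        count++;                       // the lock was acquired
    }
    finally
    {
        Monitor.Exit(countLock);
    }
}
else
{
    // the lock wasn't acquired in time, so do something else instead
}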

The Wait method will allow the thread that currently has the lock to release it, and so allow other threads to acquire it, while it waits for a signal from another thread before attempting to acquire the lock again. Another thread, one that currently has the lock, can signal to the next waiting thread (or to all waiting threads) to try to acquire the lock by using the Pulse or PulseAll method. To understand how this might be used, consider a thread that processes a buffer that is filled by another thread. The processing thread can call Wait when it has finished processing the buffer and allow the filling thread to access it. As soon as the filling thread has finished its work, it can use Pulse to tell the processing thread to try to acquire the lock and start work again. The clever part is that this mechanism generalizes to multiple work-creating and work-consuming threads, and they can all queue in an orderly fashion to access the resource using Wait and Pulse.
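As a much simplified sketch of the buffer example, with bufferLock, bufferFull, FillBuffer and ProcessBuffer as invented names:

// processing thread
lock (bufferLock)
{
    while (!bufferFull)
        Monitor.Wait(bufferLock);   // release the lock and sleep until a Pulse
    ProcessBuffer();                // the lock is held again at this point
    bufferFull = false;
    Monitor.Pulse(bufferLock);      // wake the filling thread
}

// filling thread
lock (bufferLock)
{
    while (bufferFull)
        Monitor.Wait(bufferLock);   // wait until the buffer has been processed
    FillBuffer();
    bufferFull = true;
    Monitor.Pulse(bufferLock);      // wake the processing thread
}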

Deadlock

There are other problems with locking and the most celebrated is perhaps the deadlock condition. Put simply, if thread A locks resource one and thread B locks resource two, everything is fine unless thread A also needs a lock on resource two before it can complete and thread B needs a lock on resource one before it can complete. The result is that both threads spend forever waiting for the other to finish and release the resource. This is deadlock, and it can occur in much more complicated ways than this simple “A waits for B which waits for A” situation. It is possible to create a deadlock ring of dependency by having A wait for B, which waits for C, which waits for D, which waits for A.
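In code the two-lock case is all too easy to produce by accident. A sketch, with lockOne and lockTwo standing in for the two resources:

// thread A
lock (lockOne)
{
    lock (lockTwo)   // blocks forever if thread B already holds lockTwo
    {
        // work needing both resources
    }
}

// thread B
lock (lockTwo)
{
    lock (lockOne)   // blocks forever if thread A already holds lockOne
    {
        // work needing both resources
    }
}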

There isn’t much you can do about deadlock except to be aware of it and design your access strategies with a great deal of care. You can try to avoid having any thread hold more than one lock at a time, but this can slow things down to unacceptable levels, as threads have to wait while another thread holds an oversized lock on resources, some of which it isn’t actually using. A better strategy is to attempt to acquire all of the locks that a thread needs right at the start, and to release any that have been acquired if it isn’t possible to acquire them all. Again, this can result in a loss of performance.
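A sketch of the acquire-everything-or-back-off approach, again using TryEnter with the same illustrative lock objects:

bool gotOne = false, gotTwo = false;
try
{
    gotOne = Monitor.TryEnter(lockOne, 100);
    if (gotOne)
        gotTwo = Monitor.TryEnter(lockTwo, 100);

    if (gotOne && gotTwo)
    {
        // safe to work with both resources
    }
    // otherwise give up, and perhaps retry later
}
finally
{
    if (gotTwo) Monitor.Exit(lockTwo);
    if (gotOne) Monitor.Exit(lockOne);
}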

Multi-threading with locks isn’t easy and carries the seeds of disaster. Multi-threading without locks is easy, but is guaranteed to be a disaster.

You can do most of what you need just with the Monitor, but .NET does provide other locking facilities. For example, the Mutex provides locking across process boundaries and the Semaphore can be used to control the number of threads that can access a resource. All of these work in similar ways to the Monitor and you should have no problems in understanding how they work – but if the Monitor does the job then use it.
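For instance, a SemaphoreSlim, the lightweight in-process version of the semaphore, created with a count of three (the number is arbitrary) lets at most three threads into a section at once:

SemaphoreSlim gate = new SemaphoreSlim(3);  // at most three threads at a time

void UseResource()
{
    gate.Wait();         // blocks if three threads are already inside
    try
    {
        // work with the shared resource
    }
    finally
    {
        gate.Release();  // let the next waiting thread in
    }
}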


