Programmer's Python: Async - Locks
Written by Mike James   
Wednesday, 17 September 2025

Locks are fundamental to asynchronous programming. Find out how to use them in this extract from Programmer's Python: Async.

Programmer's Python:
Async
Threads, processes, asyncio & more

Is now available as a print book: Amazon

Contents

1) A Lightning Tour of Python

2) Asynchronous Explained

3) Process-Based Parallelism
         Extract 1 - Process-Based Parallelism

4) Threads
         Extract 1 - Threads

5) Locks and Deadlock
         Extract 1 - Locks ***NEW!

6) Synchronization

7) Sharing Data
        Extract 1 - Pipes & Queues

8) The Process Pool
        Extract 1 - The Process Pool 1

9) Process Managers
        Extract 1 - Process Manager

10) Subprocesses

11) Futures
        Extract 1 - Futures

12) Basic Asyncio
        Extract 1 - Basic Asyncio

13) Using asyncio
        Extract 1 - Asyncio Web Client

14) The Low-Level API
       Extract 1 - Streams & Web Clients

Appendix I Python in Visual Studio Code

 

The key issue in multi-processing and multi-threading is how to communicate data. For processes things are both simplified and complicated by the fact that there is almost total isolation between different processes. That is, each process has its own set of variables and there are no automatically shared resources. With processes the problem is establishing communication.

With threads, on the other hand, all threads in a single process share the same memory and in particular they all have access to the same global variables. As a result you don’t have to do anything much to establish communication between threads. The only thing you have to do is to ensure that access to shared resources is controlled so that updates by more than one thread do not compete and invalidate data. Essentially, you have to synchronize actions between threads and the most common way of achieving this is to use a lock.

In this chapter we look at the Lock class, the simplest and most common of the many types of lock that are available. Processes also have the problem of synchronization, so locks are relevant to them as well. We first look at the nature of the problem and then examine how locking solves it, while creating another problem in its wake: deadlock.

Race Conditions

A race condition occurs when the outcome of two or more operations isn't completely determined because the order in which they are executed isn't fixed. It is called a race condition because you can think of the outcome as depending on which operation reaches the finishing line first. The jargon associated with race conditions is obscure and I'm not going to enumerate all of the possible types of race condition, or indeed what exactly qualifies as one. To be pragmatic, all that really matters is that a race condition means the result you get at runtime varies in ways that you might not expect from experience with single-threaded programs.

The classic race condition occurs when two or more threads attempt to access a single resource and what happens depends on the order in which they access it. Notice that in this sense a race condition can only occur if at least one of the threads is modifying the resource; a shared resource can be read by any number of threads without restriction or potential problems. Reading is safe, writing is dangerous.

This is also the reason that immutable data structures are preferred.

Of course, to communicate, threads have to write to shared data and this makes things harder. Finding a good but simple example of a race condition is difficult, especially since the improvements in the way the GIL is managed. Unless a thread gives up the GIL it can run for up to 5 ms without interruption, and this makes it difficult to capture an example where two threads access the same resource close together in time. For example, prior to the update you could simply use an increment, myCounter += 1, to demonstrate a race condition: two threads incrementing the same global variable soon displayed problems. The cause of the problem is that myCounter += 1 isn't an "atomic" instruction as it is composed of a number of actions.

That is, the update is:

  1. Retrieve the value in myCounter
  2. Add one to it
  3. Store the value back in myCounter


These three steps occur one after the other and it is quite possible that another thread will take over after any of the steps. If the new thread is running the same code then it too will perform the three steps, but it will retrieve the same value of myCounter as the first thread. Suppose myCounter is 42 and the first thread retrieves its value and is then replaced by another thread which also retrieves the value in myCounter, i.e. 42. The second thread will add one and save the result, i.e. 43 and then the first thread is restarted and it too adds one and stores the result, i.e. 43. Of course, the correct answer should be 44 as both threads should have incremented myCounter.
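You can check that the increment really is composed of separate steps using the standard dis module, which disassembles a function into its bytecode. A minimal sketch (the exact opcode names vary between Python versions, but the load, add and store steps are always separate instructions):

```python
import dis

myCounter = 0

def increment():
    global myCounter
    myCounter += 1

# Print the bytecode for the increment - you should see a
# LOAD_GLOBAL, an add operation and a STORE_GLOBAL as
# distinct instructions, any of which can be interrupted
dis.dis(increment)
```

A thread switch can occur between any two of these instructions, which is exactly the window the race condition needs.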


It should be easy to write a program that demonstrates this classic race condition, but the way that the GIL works gets in the way: the 5 ms run time makes the probability that another thread will interrupt the running thread during the increment very low, but not zero. To improve the chances of seeing the problem we need to spread the increment over a longer time, making it more likely that a race condition will occur. For example:

from cmath import sqrt
import threading
import time

myCounter = 0

def count():
    global myCounter
    for i in range(100000):
        temp = myCounter + 1
        x = sqrt(2)
        myCounter = temp

thread1 = threading.Thread(target=count)
thread2 = threading.Thread(target=count)
t1 = time.perf_counter()
thread1.start()
thread2.start()
thread1.join()
thread2.join()
t2 = time.perf_counter()
print((t2-t1)*1000)
print(myCounter)

The count function simply adds one to the global counter, but it does so via a local variable. The time between retrieving the initial value and updating it before storing it back is large enough for another thread to take over execution in the middle. To make this even more likely, a sqrt is calculated as this calls a pure C function to do the calculation and so frees the GIL, inviting another thread to take over. You could use time.sleep(0) in place of sqrt, but this demonstrates that the GIL can be released even when you don’t explicitly try to do so.

Even so, when you run this program you will occasionally see the correct answer of 200000, but much more often you should see a smaller value. The value is smaller because of the number of times two overlapping increments occurred, so that myCounter was incremented by one instead of two.



Last Updated ( Wednesday, 17 September 2025 )