Applying C - Locking
Written by Harry Fairhead
Tuesday, 27 May 2025
Mutex

We have already encountered one locking mechanism in an earlier chapter, the semaphore. The mutex, however, is one of the simplest and most widely used forms of locking. A mutex can be locked by only a single thread at a time. If another thread tries to obtain the lock on the mutex it is suspended and waits until the thread that holds the lock releases it. Notice that the mutex variable has to be accessible to all of the threads that are going to use it and it has to have the same lifetime. Only the thread that locked the mutex can unlock it, and the thread that holds the lock cannot try to lock it again – both are undefined behavior. Finally, it has to be kept in mind at all times that mutex locking is cooperative and voluntary. If a thread wants to access the shared data it can simply ignore any locking that you have provided. A mutex also only works within a single process, i.e. it is a way of synchronizing threads within a process. If you need to synchronize processes then you need to use a POSIX semaphore.

You can create a mutex in two ways. You can declare it and initialize it to a default:

    pthread_mutex_t mymutex = PTHREAD_MUTEX_INITIALIZER;

Generally it is better to declare the mutex outside of any functions, i.e. as a static variable. This ensures that it is initialized and ready to be used. The second way is to use an initialization function:

    pthread_mutex_t mymutex;
    pthread_mutex_init(&mymutex, &attr);

The advantage of this is that you can use an attribute object to specify how you want the mutex to be initialized. You can only initialize a mutex once. Once you have finished using a mutex, you can remove it with pthread_mutex_destroy.

A mutex is initially created unlocked. To lock it you can use:

    pthread_mutex_lock(&mymutex);

If the mutex is already locked the thread is suspended and it waits for the lock to become free. In other words, the lock function only returns when it has the lock on the mutex.
The thread that locked the mutex can unlock it using:

    pthread_mutex_unlock(&mymutex);

The unlock function always returns at once and the thread that unlocked the mutex continues. The operating system, at some undefined time, will select one of the threads that are waiting on the lock to wake up. That thread then acquires the lock and its call to pthread_mutex_lock returns. The other threads, if there are any, continue to wait. Notice that there is no way to know which waiting thread is started, or exactly when, and this shouldn't make any difference to your program.

Going back to our counting example, adding a mutex to the increment is easy:

    int counter;
    pthread_mutex_t mymutex = PTHREAD_MUTEX_INITIALIZER;

    void *count(void *p) {
        for (int i = 0; i < 5000; i++) {
            pthread_mutex_lock(&mymutex);
            counter++;
            pthread_mutex_unlock(&mymutex);
        }
        return &counter;
    }

Now when you run the program the answer is always 10,000 as the two threads cannot interfere with each other's update. Problem solved, but it might not be an acceptable solution. In this case the time between unlocking and locking again is very small, so once a thread gains the lock any other thread has only a very small window of opportunity in which to acquire it. In this sense, the program is closer to one where the first thread to gain the lock keeps it until all 5000 updates have been completed. Locking is safe, but it may not be fair in the sense that other threads may not get a chance to progress after the first thread gains the lock. It is up to the operating system to make sure that the threads waiting on the lock get a turn, but there are no guarantees that this will happen. If one thread holds the lock for most of the time, the other threads suffer "starvation" as they cannot acquire the lock often enough to make progress. In most programs the lock is acquired for a short time and then released, making starvation unlikely.
If you have to deal with a greedy thread problem then you need something more sophisticated than a mutex – possibly a condition variable, see later. A quick fix is to use the sched_yield function, declared in sched.h, which is also part of the POSIX standard. This causes the calling thread to release the processor and allow another thread to run. If you include:

    sched_yield();

after each of the calls to unlock, you will discover that each of the threads gets a more equal share of the processor's time, but the overall efficiency goes down due to the overhead involved in yielding after each increment. In the real world it can be very difficult to work out what is happening, but the principle is easy - make sure that a lock is held for the minimum time by each thread, and that on average it is unlocked for more time than it is locked.