Programmer's Python: Async - Shared Memory
Written by Mike James
Tuesday, 28 October 2025
Raw Shared Memory

As well as the shared ctypes approach to sharing memory, you can also do the job at a much lower level, in terms of bytes. This is slightly faster and slightly simpler if you are already working with bytes rather than Python data. For example, if you are reading data from a connected device it might well provide the data as a byte sequence.

Using shared memory is easy. Most of the problems that you will encounter are due to the need to convert the data to a byte sequence. To create or connect to an area of shared memory you use:

multiprocessing.shared_memory.SharedMemory(name = None, create = False, size = 0)

If create is True the memory block is created, and if you don't assign a name one is generated; the name attribute can be used to retrieve it. The size parameter gives the minimum number of bytes that are allocated to the area. As memory is often allocated in multiples of a fixed page size, you may well get more than you asked for, and you can use the size attribute to discover the number of bytes actually allocated. Usually one of the processes using the shared area will create the block and the others will connect to it using create = False and specifying the name of the block. In this case the size parameter is ignored.

The shared memory object has a small number of attributes and methods to let you work with the block:

buf – a memoryview giving access to the contents of the block
name – the unique name of the block
size – the size of the block in bytes
close() – closes this process's access to the block
unlink() – requests that the underlying block be destroyed
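To make the create/attach pattern concrete, here is a minimal sketch run within a single process. The second handle attaches by name exactly as another process would; the 16-byte size and the value 42 are arbitrary choices for illustration:

```python
import multiprocessing.shared_memory as shm

# Create a new block of at least 16 bytes; the name is generated for us.
block = shm.SharedMemory(create=True, size=16)
generated_name = block.name   # auto-generated unique name
allocated = block.size        # may be rounded up to a page multiple

# A second handle attaches by name, just as another process would.
view = shm.SharedMemory(generated_name)
view.buf[0] = 42              # write through one handle...
val = block.buf[0]            # ...and read it back through the other

view.close()                  # each user closes its own handle
block.close()
block.unlink()                # one user destroys the block
```

Note that both handles see the same byte, which is the whole point: the memory is shared, not copied.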
Each process should call close when it has finished using the block, and one of the processes should call unlink to signal that they have all finished and the block can be destroyed.

As a simple example, we can implement the myUpdate counting example given earlier. As the shared resource is a byte sequence we can only count up to 255 using a single element of the buffer, but this is sufficient to demonstrate the principle:

import time
import multiprocessing
import multiprocessing.shared_memory

def myUpdate(name, lock):
    mySharedMem = multiprocessing.shared_memory.SharedMemory(name)
    for i in range(255):
        with lock:
            mySharedMem.buf[0] = (mySharedMem.buf[0] + 1) % 256
    mySharedMem.close()
With the lock in use the final count is 255. If you remove the lock the result is lower due to race conditions. Notice that each of the child processes closes the memory block, but only the main process unlinks it, allowing the operating system to reclaim the memory.

A simplification is that you can pass the mySharedMem object directly to the child process:

p1 = multiprocessing.Process(target = myUpdate, args = (mySharedMem, lock))

and then you can use mySharedMem in myUpdate without having to connect to it using:

multiprocessing.shared_memory.SharedMemory(name)

As already stated, the real problem with working with shared memory is that everything has to be reduced to a byte sequence. For example, if you want to extend the counter example to count to more than 255 you need to interpret the bytes as an integer. This is possible using:

int.to_bytes(length, byteorder, signed = False)

and:

int.from_bytes(bytes, byteorder, signed = False)

These methods convert a bignum into a sequence of bytes and back again; they are discussed in Programmer's Python: Everything Is Data. Using them we can convert the previous example to count to more than 255.

This version counts reliably to 2001, unless you remove the lock, in which case race conditions ensure that the result is smaller. It isn't particularly fast because of all the packing and unpacking of eight bytes to and from the buffer.

There are ways of converting almost any Python data structure into a byte sequence and back again – this is a topic of Programmer's Python: Everything Is Data. You can even use pickling to convert general objects into a data stream, or ctypes to use a C representation, but if you are doing this why not just use the shared ctypes approach? Raw shared memory only has an advantage when the data is already a byte sequence or is very easy and cheap to convert into one.
Last Updated: Wednesday, 29 October 2025