|Raspberry Pi IoT In C - Events & Interrupts|
|Written by Harry Fairhead|
|Monday, 02 November 2020|
Working with inputs is a difficult problem. One common solution is the interrupt but, for the Pi, and Linux in general, events are better. This is an extract from the newly-published Raspberry Pi IoT in C, Second Edition.
Advanced Input – Events, Threads, Interrupts
As already discussed in earlier chapters, the big problem with input is that it happens when it wants to and usually not when you decide. What this means is that you usually have to check for input repeatedly just to see if there is any – hence the polling loop.
Most programmers find polling loops inelegant although, as long as they are well designed, they are a perfectly efficient way of getting the job done. However, if you don’t poll for input at the right time, a change can be missed completely. If a GPIO line goes high and then low while the polling loop is doing something else, the input is lost forever. There are two common solutions: events and interrupts.
Events and interrupts are related concepts but there are some important differences.
An event is like a latch or a memory that something happened. Imagine that there is a flag that will be automatically set when an input line changes state. The flag is set without the involvement of software, or at least any software that you have control over. It is useful to imagine an entirely hardware-based setting of the flag, even if this is not always the case. With the help of an event you can avoid missing an input because the polling loop was busy doing something else. Now the polling loop reads the flag rather than the actual state of the input line and hence it can detect if the line has changed since it last polled. The polling loop resets the event flag and processes the event. Of course, it can’t always know exactly when the event happened, but at least it hasn’t missed it altogether.
A simple event can avoid the loss of a single input, but what if there is more than one input while the polling loop is unavailable? The most common solution is to create an event queue – that is, a FIFO (first in, first out) queue of events as they occur. The polling loop now reads the event at the front of the queue, processes it and reads the next. It continues like this until the queue is empty, at which point it simply waits for an event. As long as the queue is big enough, an event queue means you don’t miss any input, but input events are not necessarily processed close to the time that they occurred. They are processed in order, but unless the events are time-stamped the program has no idea when they happened.
An event queue is a common architecture but, to work or have any advantages, it needs either multiple cores, so that events can always be added to the queue before another occurs, or the help of hardware, usually in the form of interrupts. Notice that an event, or an event queue, cannot increase the program’s throughput or reduce its latency – the time it takes to react to an input. In fact, an event queue decreases throughput and increases latency due to the overheads of its implementation. All an event system does is ensure that you do not miss any input and that all input gets processed eventually.
Interrupts Considered Harmful?
Interrupts are often confused with events but they are very different. An interrupt is a hardware mechanism that stops the computer doing whatever it is currently doing and makes it transfer its attention to running an interrupt handler. You can think of an interrupt as an event flag that, when it is set, interrupts the current program to run the assigned interrupt handler.
Using interrupts means the outside world decides when the computer should pay attention to input and there is no need for a polling loop. Most hardware people think that interrupts are the solution to everything and polling is inelegant and only to be used when you can’t use an interrupt. This is far from the reality.
There is a general feeling that real-time programming and interrupts go together and that if you are not using an interrupt you are probably doing something wrong. In fact, the truth is closer to the opposite: if you are using an interrupt you are probably doing something wrong. Indeed, some organizations consider interrupts so dangerous that they ban their use altogether.
Interrupts are only really useful when you have a low frequency condition that needs to be dealt with on a high priority basis. Interrupts can simplify the logic of your program, but rarely does using an interrupt speed things up because the overhead involved in interrupt handling is usually quite high.
If you have a polling loop that takes, say, 100ms to poll all inputs and there is an input that demands attention in under 60ms, then clearly the polling loop is not going to be good enough. Using an interrupt allows the high-priority input to interrupt the polling loop and be processed well within its 60ms deadline. However, if this happens very often, the polling loop will cease to work as intended. Notice that an alternative is simply to make the polling loop check that input twice per loop, cutting its worst-case wait to around 50ms.
For a more real-world example, suppose you want to react to a doorbell push button. You could write a polling loop that simply checks the button status repeatedly and forever, or you could write an interrupt service routine (ISR) to respond to the doorbell. The processor would be free to get on with other things until the doorbell was pushed, when it would stop what it was doing and transfer its attention to the ISR.
How good a design this is depends on how much the doorbell has to interact with the rest of the program and how many doorbell pushes you are expecting. It takes time to respond to the doorbell push and then the ISR has to run to completion – what is going to happen if another doorbell push occurs while the first is still being processed? Some processors have provision for forming a queue of interrupts, but this doesn't change the fact that the processor can only handle one interrupt at a time. Of course, the same is true of a polling loop, but if you can't handle the throughput of events with a polling loop, you can't handle it using an interrupt either, because interrupts add the time taken to transfer to the ISR and back again.
Finally, before you dismiss the idea of having a processor do nothing but ask repeatedly "is the doorbell pressed", consider what else it has to do. If the answer is "not much" then a polling loop might well be your simplest option. Also, if the processor has multiple cores, then the fastest way of dealing with any external event is to use one of the cores in a fast polling loop. This can be considered a software emulation of a hardware interrupt – not to be confused with a software interrupt, or trap, which is a hardware interrupt triggered by software.
If you are going to use interrupts to service input then a good design is to use the interrupt handler to feed an event queue. This at least lowers the chance that input will be missed.
Despite their attraction, interrupts are usually a poor choice for anything other than low frequency events that need to be dealt with quickly.
|Last Updated ( Saturday, 07 November 2020 )|