Interrupts
Using registers

I’ve started so I’ll finish

Let’s start with the simplest part of the problem – the registers. Every processor has a set of registers that it uses to obey instructions. Obviously, when a task switch occurs these registers have to be saved, and then restored when the original task starts again. It is also clear that an interrupt can’t stop a processor in the middle of an instruction in such a way that the instruction cannot be restarted. To allow for all this the sequence of events is something like:

1) Interrupt signalled

2) Processor notices interrupt but finishes the current instruction of task one

3) All processor registers are saved

4) The program counter is loaded with the address of the interrupt routine

5) The interrupt routine does its stuff and finally gives an RTI (ReTurn from Interrupt) instruction

6) This loads all the registers with all the values they had before the interrupt and task one starts again

Notice that because the program counter is saved, task one restarts from exactly where it was and everything will work as before, as long as the interrupt routine doesn’t change any memory locations that task one is using or try to use any hardware that task one is using. These last two conditions are fairly easy to ensure in small systems but they become increasingly difficult to deal with as the system becomes larger and the tasks running become more numerous and varied.

Where do you store the registers during a task swap? If you remember the Babbage’s Bag on stacks you will already have guessed that the system stack is the best place for them. It allows the machine to handle as many nested interrupts as you like and the RTIs all match up correctly – just like subroutines.
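To make the mechanism concrete, here is a minimal C sketch of steps 2 to 6 – a toy processor whose whole register set is pushed onto a stack when an interrupt arrives and popped again by the RTI. Every name in it (Context, irq_enter, rti and so on) is invented for illustration; real hardware does all of this in silicon, not in C:

#include <stdio.h>

/* A toy model of steps 2-6. All names here are invented
   for illustration, not any real processor's API.        */
typedef struct { int a, b; unsigned pc; } Context;  /* the "registers" */

#define MAX_NEST 8
static Context stack[MAX_NEST];       /* the system stack of contexts */
static int sp = 0;

static Context cpu = { 1, 2, 100 };   /* task one's current state */

static void irq_enter(unsigned handler_pc) {
    stack[sp++] = cpu;                /* 3) save all the registers    */
    cpu.pc = handler_pc;              /* 4) jump to interrupt routine */
}

static void rti(void) {
    cpu = stack[--sp];                /* 6) restore every register -  */
}                                     /*    task one resumes exactly  */
                                      /*    where it left off         */
int main(void) {
    irq_enter(9000);                  /* 1-2) interrupt arrives       */
    cpu.a = 42;                       /* 5) handler free to use regs  */
    irq_enter(9500);                  /*    a nested interrupt - the  */
    rti();                            /*    stack makes both returns  */
    rti();                            /*    match up correctly        */
    printf("a=%d b=%d pc=%u\n", cpu.a, cpu.b, cpu.pc);  /* a=1 b=2 pc=100 */
    return 0;
}

Run it and task one’s registers come back untouched, however deeply the interrupts nest – that is the whole point of using a stack.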

Indeed, interrupt procedures are just like subroutines that are called when a piece of external hardware decides, rather than as a natural result of the flow of control through a program. This point of view makes interrupts seem familiar and cosy, but they aren’t, and they contain some very nasty traps!

Faster or safer

At this point the real world enters the picture and the small question of how long it takes to stack all of those registers starts to matter. There are systems where the time it takes to get to an interrupt routine – the latency – is important. For example, if the system is controlling a chemical reaction it might be vital that it responds to a sensor reading as fast as possible. In such cases even interrupting part way through some instructions may be worthwhile, and spending the time to stack all those registers is definitely not a good idea!

Some processors solve the problem by not stacking the registers before an interrupt, leaving it to the interrupt routine to stack any registers it needs to make use of and to restore them before executing the RTI instruction.

This is fine but there are two problems: if the interrupt routine needs to use all of the registers it still has to stack them all, and there is always the possibility that it will not restore them properly – bugs happen.
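As a sketch of this “leave it to the routine” approach, here is the same sort of toy model in C, with a handler that saves only the single register it actually uses. It is faster than saving everything, but one forgotten restore and the interrupted task is silently corrupted – again, every name is invented for illustration:

#include <stdio.h>

/* Toy CPU again - invented names, for illustration only. */
static int reg_a = 1, reg_b = 2;      /* task one's registers */

static void fast_handler(void) {
    int saved_a = reg_a;              /* stack only what we use...    */
    reg_a = 99;                       /* ...do the work...            */
    reg_a = saved_a;                  /* ...restore before the RTI.   */
}                                     /* Forget that last line and    */
                                      /* task one is silently broken. */
int main(void) {
    fast_handler();                   /* simulate the interrupt */
    printf("a=%d b=%d\n", reg_a, reg_b);   /* still a=1 b=2 */
    return 0;
}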

There have been lots of solutions to the problem, the best probably being the provision of a complete duplicate set of internal registers that can be swapped in as a single operation. This worked, but only in small systems, because duplicating every register is expensive and a single spare set only copes with one level of interrupt. Today it is generally recognised that for any general-purpose system the hardware should be optimised to make a task switch as fast as possible while still using the stack to store everything. This has become easier as the hardware has become more sophisticated.
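The Z80’s alternate register set is the classic example of the duplicate-bank idea – a single instruction swapped the working registers for the spares. A rough C model of the trick, with invented names, shows why the “save” is so cheap: switching banks is just one pointer assignment:

#include <stdio.h>

/* Sketch of a duplicate register bank. Switching banks is a
   single pointer swap, so saving costs almost nothing - but
   one spare bank only copes with one level of interrupt.    */
typedef struct { int a, b; } Bank;

static Bank banks[2] = { {1, 2}, {0, 0} };
static Bank *active = &banks[0];      /* task one runs on bank 0 */

static void swap_banks(void) {        /* the whole context switch */
    active = (active == &banks[0]) ? &banks[1] : &banks[0];
}

int main(void) {
    swap_banks();                     /* interrupt: handler gets bank 1 */
    active->a = 99;                   /* handler can't touch task one   */
    swap_banks();                     /* RTI: back to bank 0, untouched */
    printf("a=%d b=%d\n", active->a, active->b);   /* a=1 b=2 */
    return 0;
}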

Paradoxically, it has also become harder as the hardware has become more sophisticated! For example, most modern processors such as the Pentium make use of a cache memory and a “pipeline” of instructions to speed things up. You can think of the pipeline as a sort of assembly line approach to obeying a program, with lots of instructions at different stages of being obeyed at any one time. Much of the speed of a modern processor comes from its pipeline construction and there is nothing worse than an action which causes the pipeline to stall – that is, have to be cleared and restarted.

Guess what an interrupt does! To avoid unwanted interaction between tasks the instruction pipeline has to be restarted when an interrupt occurs.

In the real world there is still a fundamental trade-off between fast and safe interrupts. You can have fast interrupts if you are happy to leave the control of interaction between tasks to the writer of the interrupt routine, or you can have safe interrupts if you are not too fussy about latency or overall performance.

The two-culture problem

There are two very definite views of interrupts, depending on whether you are working with real-time systems or with bigger information processors. If you are building a device to control something – the temperature of a chemical process, say – then the processor that you use is small and optimised for real-time control. It generally doesn’t have an instruction pipeline and there is no need to stack everything when an interrupt occurs.

The software is probably all written by one team of programmers and there is hardly any operating system in the way of getting the job done. In this environment it is hard not to think of interrupts as a blessing. You can write interrupt handlers for all of the hardware devices that need to be used and ensure that they all interact to just the right degree. In practice this isn’t as easy as it sounds, and any bug that does get into the system can be next to impossible to find.

Consider for a moment the way that non-interrupt-using programs are debugged and tested. They are taken through a standard sequence of inputs and actions. Such programs always produce the same outputs for the same explicit inputs – if they don’t we generally suspect a hardware fault of some kind! Now consider a program with a number of interrupt routines. In this case the exact sequence of events depends on the exact timing and order of the interrupts. Perhaps an error only makes itself known when interrupt A occurs while interrupt routine B is working after interrupt routine X. You can see that because the interrupts are caused by the outside world the number of ways that the program can execute the instructions is huge and the potential for undiscovered unwanted interactions is also huge.
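You can reproduce this sort of bug without any interrupt hardware at all. In the C sketch below a second thread stands in for an interrupt routine and both “tasks” update a shared counter with no locking. Run it a few times and the final total varies even though the inputs never change – the thread, the counter and the loop counts are all invented purely for the demonstration:

#include <stdio.h>
#include <pthread.h>
/* compile with: cc race.c -pthread */

/* Two "tasks" update the same counter with no locking. The final
   value depends on exactly how their updates interleave, so the
   output can differ from run to run - the same timing-dependent
   behaviour that makes interrupt bugs so hard to reproduce.     */
static long counter = 0;

static void *interrupt_stand_in(void *arg) {
    (void)arg;
    for (long i = 0; i < 1000000; i++)
        counter++;                    /* read-modify-write: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, interrupt_stand_in, NULL);
    for (long i = 0; i < 1000000; i++)
        counter++;                    /* "task one" does the same */
    pthread_join(t, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}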

Programs that don’t use interrupts are often called “deterministic” because they always do the same thing. Programs that use interrupts are called “asynchronous” or “non-deterministic” because of the way that they hop about all over the place while working.

In simple systems you can just about keep control of a non-deterministic program, but as the system’s complexity increases non-deterministic programs become next to impossible to make work properly. So much so that most of the errors that occur in interrupt-driven programs are put down to intermittent hardware faults!
