What Is Asynchronous Programming?
Written by Mike James   
Friday, 04 March 2022

Asynchronous programming has become very important in the last few years, but many programmers find out about it by doing it. So what exactly is asynchronous programming, why is it necessary and why is it growing in importance?

This is the first of two articles on asynchronous programming:

  1. What Is Asynchronous Programming?

  2. Managing Asynchronous Code - Callbacks, Promises & Async/Await

Before getting started, I should say that this is a down-to-earth introduction to asynchronous programming and not a mathematical analysis. It is possible to make the whole subject seem so complicated that it is a miracle that we make use of it at all.

Asynchronous programming has been with us from the very early days of computing because of the need to make the best use of the hardware. But recently it has become almost the standard programming paradigm. So much so that you could say that most programs written today are object oriented asynchronous programs.

Often the programmer is fully aware that what they are doing is object oriented but only vaguely aware that they are writing asynchronous code. 

It all starts with the User Interface (UI). If you allow the user to interact with your program by clicking buttons and selecting objects, then you have an immediate problem:

What is your program doing when the user isn't clicking or selecting something?

The obvious answer is nothing at all. Your program has to wait until the user gives it something to do. This is the case whenever your program interacts with the user even if it is simply waiting at the command line for the user to type something in. 

You can implement this sort of system using polling. That is, your program can go around the UI asking "has this button been clicked?", "has that button been clicked?" and so on. This ties up resources looping around the UI, checking just in case something has happened.
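A minimal sketch of what polling looks like, in TypeScript, is shown below. The button list, the "clicked" flag and the polling interval are purely illustrative - they are not how any real framework stores its state:

// A purely illustrative polling loop: the program repeatedly asks every
// control whether anything has happened, tying up the thread even when
// the user is doing nothing at all.
type Button = { id: string; clicked: boolean };

const buttons: Button[] = [
  { id: "ok", clicked: false },
  { id: "cancel", clicked: false },
];

function pollButtons(): void {
  for (const button of buttons) {
    if (button.clicked) {
      button.clicked = false;                          // reset the flag
      console.log(`handling click on "${button.id}"`);
    }
  }
}

// Ask the same questions over and over, whether or not anything has happened.
setInterval(pollButtons, 1);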

This isn't how most UI frameworks organize things. Instead they implement an event handling system. The user clicking a button is defined as an event and you can associate code with each event - the event handler. When the event occurs the event handler is run. 
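In a browser, for example, attaching an event handler looks something like the following TypeScript sketch - the button id "save" is just an assumption made for the sake of illustration:

// Registering an event handler: the code runs only when the click event
// actually occurs, so nothing has to keep checking the button in the meantime.
const saveButton = document.getElementById("save");

saveButton?.addEventListener("click", () => {
  console.log("save button was clicked");
});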

This is such a familiar pattern that we hardly give it a second thought, but you should. For example, suppose there are three buttons on the screen and the user clicks on all three at high speed - do three different event handlers get started?

In most cases the answer is no - only one event handler is started at a time. The most common event-handling architecture is single-threaded - that is, it has only one thread of execution and at any one time only one instruction in your entire program is being obeyed.

This single threaded event handling system is nearly always implemented using an event or message queue.

The idea is that while your program is doing nothing its thread looks after the event queue. When an event happens, a record of the event is added to the queue. When your program's thread isn't doing anything, it looks at the event queue, takes the first event from the front and starts running the corresponding event handler. When that event handler completes, the thread goes back to the event queue to process any events that might have happened while it was busy.

So events are added to the event queue and the UI framework provides a dispatcher that runs on your thread and calls the event handlers as needed. At any moment the program's thread is either in the dispatcher finding out what event it has to process next or in an event handler. Of course if there is no event to process the thread just idles waiting for an event to be added to the queue.
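A toy model of such a dispatcher, written in TypeScript, might look like the sketch below. The AppEvent type, the handler map and the setTimeout trick that keeps the loop going are all stand-ins for what a real framework does internally:

// A toy single-threaded dispatcher: events join the back of a queue and the
// dispatcher takes them from the front, running each handler to completion
// before it looks at the queue again.
type AppEvent = { name: string };
type Handler = (e: AppEvent) => void;

const eventQueue: AppEvent[] = [];
const handlers = new Map<string, Handler>();

function on(name: string, handler: Handler): void {
  handlers.set(name, handler);            // associate code with an event
}

function postEvent(e: AppEvent): void {
  eventQueue.push(e);                     // a record of the event joins the queue
}

function dispatch(): void {
  const e = eventQueue.shift();           // take the first event from the front
  if (e) {
    handlers.get(e.name)?.(e);            // run its handler to completion
  }
  setTimeout(dispatch, 0);                // then go back to the queue
}

// Usage: register a handler, post an event and start the dispatcher.
on("click", (e) => console.log("handling a " + e.name + " event"));
postEvent({ name: "click" });
dispatch();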

This is a single threaded event system.

The OS, Threading And True Parallelism

You might at this point ask the very reasonable question of how an event gets put into the event queue while the program's thread is off running an event handler. The answer is that the operating system has lots of threads and it can use one of them to respond to the user and place a message in the event queue.

Now we hit a subtle point that might confuse you if you don't know much about operating systems and hardware.

Until quite recently there was only ever one thread of execution in a typical machine. The multiple threads provided by the operating system were created by preemptive scheduling - only one thread runs at any one time and the OS switches which thread is active.

In this sense the entire machine, OS and all, is a single-threaded system. The main distinction is that the OS is preemptive - threads can be interrupted partway through - whereas an event-driven system is cooperative, in the sense that an event handler, once started, runs to completion.

It is this preemptive - cooperative difference that is important. 

The final complication is that modern machines have multiple cores and this means that they can support more than one thread of execution and hence are capable of true parallelism.

For the moment we can ignore this detail - important though it is.

The most important distinction is that in a preemptive system the programmer doesn't have to worry about keeping the main thread occupied. Every few milliseconds the hardware interrupts the running thread and hands control to the operating system, which picks another process and runs it. In a preemptive system the one thread of execution is automatically shared fairly between all of the processes, no matter how they are programmed.

Asynchronous Code

So a single-threaded event system works by placing events in a queue and processing them one by one, calling the appropriate event handler. The event handler runs until it completes, at which point it returns control to the dispatcher, which deals with the next event in the queue.

This is an asynchronous system because you cannot say exactly when anything is going to happen. There is no set order in which your code will be executed. You may have a program of, say, 1000 instructions, but you cannot say in what order they will be executed - what happens depends on what buttons and options the user clicks. Event handlers are called in many different orders.

If you just consider this idea for a moment it might seem amazing that we can write asynchronous code at all. All those different ways it can be run! Of course, the point is that the system is made manageable by the restriction that only one event handler runs at a time and it always runs to completion, which keeps the interactions between event handlers simple. To put it another way, event handlers are atomic, i.e. indivisible, actions.
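You can see this atomicity in any single-threaded runtime. In the TypeScript sketch below two setTimeout callbacks stand in for event handlers: the deliberately slow one delays the one queued behind it, but neither is ever interrupted partway through.

// Handlers run to completion: the first callback deliberately keeps the
// single thread busy, so the second one - although due at the same time -
// cannot start until the first has returned.
setTimeout(() => {
  const start = Date.now();
  while (Date.now() - start < 1000) {
    // busy-wait for a second to simulate a long-running event handler
  }
  console.log("slow handler finished");
}, 0);

setTimeout(() => {
  console.log("fast handler ran");        // printed only after the slow one
}, 0);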


