Principles Of Execution - The CPU
Written by Harry Fairhead   
Thursday, 10 January 2019

The real complexity of any computer system resides in the processor, but do you know how it works? I mean how it really works? How does the code that you write turn into something that does something? When you know how, it's not magic - just a matter of "fetch" and "execute".

What Programmers Know


Contents

  1. The Computer - What's The Big Idea?
  2. The Memory Principle - Computer Memory and Pigeonholes
  3. Principles of Execution - The CPU
  4. The Essence Of Programming
  5. Variables - Scope, Lifetime And More
  6. Binary Arithmetic
  7. Hexadecimal
  8. Binary - Negative Numbers
  9. Floating Point Numbers
  10. Inside the Computer - Addressing
  11. The Mod Function
  12. Recursion
  13. The Lost Art Of The Storage Mapping Function *
  14. Hashing - The Greatest Idea In Programming
  15. XOR - The Magic Swap
  16. Programmer's Introduction to XML
  17. From Data To Objects*
  18. Stacks And Trees
  19. The LIFO Stack - A Gentle Guide
  20. Data Structures - Trees
  21. Inside Random Numbers
  22. The Monte Carlo Method
  23. Cache Memory And The Caching Principle
  24. Information Theory
  25. Coding Theory
  26. Data Compression The Dictionary Way
  27. Dates Are Difficult
  28. Magic of Merging
  29. Power of Operators
  30. The Heart Of A Compiler
  31. The Fundamentals of Pointers
  32. Public Key Encryption
  33. Quick Median
  34. Functional And Dysfunctional Programming*

* Recently revised

So far we have looked at the overall workings of a computer and specifically the memory principle. Memories are devices that, when given one input, automatically produce the associated output - that's a read - or automatically store a second input presented along with the first - that's a write. The memory principle and memory mechanisms are fascinating, but we need more. We need something that can execute instructions making use of the memory - we need a processor.

The processor is quite another level of difficulty.

The processor is the computer

There really is no question of the validity of this assertion.

If you don't believe me, try running a program written for a PC on a Mac.

The point is that computers with different processors are different – computers with the same processor are just faster or slower.

The details of memory management and caching may be impressive, but the real complexity of any computer system resides in the processor and it is time to look more closely at how it does what it does.

Even if you think you already know, you still might find the explanation interesting. The reason is that many books and courses don't really tell you the whole story. They stop short and leave you with a sense that the processor is somehow magic, even though you know the rough outline of how it all should work.

 



Elsewhere we discovered that what makes a computer is the intimate connection between processor and memory. When the processor places an address on the address bus, a particular memory location is selected and it either stores the data on the data bus or places the data it holds onto the data bus.

Notice that this isn't magic. There isn't a little humanoid that goes and finds a particular memory location by address and then retrieves the contents for the CPU. The action is as automatic as a key in a lock. The CPU puts the address on the address bus and this selects and activates a particular memory location. The read/write line sets the memory location's behavior and it either places its content on the data bus or it "latches" or stores the contents of the data bus.

All automatic.
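
To make the idea concrete, here is a toy model of a memory cycle in C. It is a minimal sketch with invented names, not a real bus protocol: an array stands in for the pigeonholes, the array index plays the part of the address bus and a flag plays the part of the read/write line.

#include <stdint.h>
#include <stdio.h>

static uint8_t memory[256];   /* 256 one-byte pigeonholes */

enum rw_line { READ, WRITE };

/* one bus cycle: the address selects a location and the
   read/write line decides what the location does */
uint8_t bus_cycle(uint8_t address, enum rw_line line, uint8_t data)
{
    if (line == WRITE) {
        memory[address] = data;   /* the location latches the data bus */
        return data;
    }
    return memory[address];       /* the location drives the data bus */
}

int main(void)
{
    bus_cycle(0x10, WRITE, 42);                /* store 42 at address 0x10 */
    printf("%d\n", bus_cycle(0x10, READ, 0));  /* read it back - prints 42 */
    return 0;
}

Notice that there is no searching anywhere - the address selects the location directly, which is the software equivalent of the key-in-a-lock behavior.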

Program Counter

This might well be the major operating principle of a computer but it leaves out what the processor actually “does” with the data.

After all, it is called a "processor" so presumably it doesn't just store and retrieve bit patterns. We already know how binary patterns can be used to represent numbers and we know how Boolean logic can be used to manipulate them – with addition and subtraction.
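
For example, a single bit of addition can be built from nothing but AND, OR and XOR. The C sketch below is illustrative only - real hardware does this with logic gates, not code - but the Boolean expressions are exactly the ones the gates implement:

#include <stdio.h>

/* A one-bit full adder built purely from Boolean operations -
   the sort of logic the processor's arithmetic circuits are made of. */
void full_adder(int a, int b, int carry_in, int *sum, int *carry_out)
{
    *sum       = a ^ b ^ carry_in;                /* XOR produces the sum bit */
    *carry_out = (a & b) | (carry_in & (a ^ b));  /* AND/OR produce the carry */
}

int main(void)
{
    /* add 5 (0101) and 3 (0011) one bit at a time, least significant first */
    int a[4] = {1, 0, 1, 0};
    int b[4] = {1, 1, 0, 0};
    int sum, carry = 0;
    for (int i = 0; i < 4; i++) {
        full_adder(a[i], b[i], carry, &sum, &carry);
        printf("bit %d: %d\n", i, sum);  /* prints 0, 0, 0, 1 i.e. 1000 = 8 */
    }
    return 0;
}

Chain enough of these one-bit adders together and you have an adder for words of any size - complexity built from the simple and the regular.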

But this is only part of what goes on. When you first start to consider the workings of the processor it is usually arithmetic that the focus falls on. The reason is that we often, mistakenly, think of computers as “computers” but for the vast majority of the time a computer is actually doing something other than arithmetic.

Once you start looking a little more closely the magic seems to be more to do with how this lump of silicon, or whatever it is made from, can obey the commands in a program. How on earth does it look at the next instruction in a program, work out what it means and then arrange the immutable hardware to do it? Software may be soft but hardware is hard and it doesn't change depending on what the instruction wants it to do.

Once again there is a tendency to think of a little humanoid sitting where the processor is, waiting for the next instruction to appear and then doing whatever it commands. This is, of course, not how it happens and it is all just as automatic as the memory storage and retrieval.

The “trick” that the processor performs seems very complex but it is all based on building the complex from the simple and the very regular – but isn’t this always the principle when it comes to computers?
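
You can see the trick in miniature in software. The sketch below is purely illustrative - the op codes and names are invented - but it shows a fixed piece of code whose behavior is selected by the bit pattern fed to it, just as the processor's fixed hardware is steered by the bit pattern of each instruction:

#include <stdio.h>

/* Fixed "hardware", variable behavior: this code never changes,
   yet the op code it is handed decides what it does.
   The op codes here are invented for illustration. */
void execute(int opcode, int *accumulator, int operand)
{
    switch (opcode) {
        case 0: *accumulator  = operand; break;  /* load     */
        case 1: *accumulator += operand; break;  /* add      */
        case 2: *accumulator -= operand; break;  /* subtract */
    }
}

int main(void)
{
    int acc = 0;
    execute(0, &acc, 5);  /* load 5 */
    execute(1, &acc, 3);  /* add 3  */
    printf("%d\n", acc);  /* prints 8 */
    return 0;
}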

The first thing a processor needs is some way of keeping track of where it has reached in the program. This is done using a single internal memory location, usually called the "Program Counter" or PC – and it doesn't count programs! All internal memory locations within the processor are called "registers", for historical reasons and to indicate that they are generally just a little more than simple memory locations. For example, the PC register has two operations that it can perform. It can be initialized to a set value and it can be incremented, i.e. it can add one to the value stored in it.
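
In code the PC is about as simple as a register gets. A hypothetical sketch, assuming 16-bit addresses and using invented names:

#include <stdint.h>
#include <stdio.h>

/* A toy Program Counter with its two operations:
   set it to a value and increment it by one. */
typedef struct { uint16_t value; } pc_register;

void pc_set(pc_register *pc, uint16_t address) { pc->value = address; }
void pc_increment(pc_register *pc)             { pc->value++; }

int main(void)
{
    pc_register pc;
    pc_set(&pc, 0);  /* point at the start of the program */
    for (int i = 0; i < 4; i++) {
        printf("next instruction at address %u\n", (unsigned)pc.value);
        pc_increment(&pc);  /* move on to the next instruction */
    }
    return 0;
}

Set it once to where the program starts and from then on the increment operation is all that is needed to step through memory one instruction at a time.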


