|Jay Forrester and Whirlwind|
|Written by Historian|
|Friday, 18 November 2016|
Forrester realised that storage was the critical problem. The whole project depended on finding a more reliable and more economical method of storage.
He started to think about ways of making a 2D or 3D form of storage rather than the one-dimensional recirculating storage represented by the delay line and the Williams tube. After spending much time thinking about the problem, Forrester encountered an article on the use of magnetic materials as amplifiers. He ordered some of the material and built an array that passed current through rings of the material to magnetise each ring in one of two directions.
This worked, but it was too slow. Then came the breakthrough! Forrester devised a scheme that involved threading rings of the magnetic material onto an x-y grid of wires. Each ring, or core, was threaded onto a unique pair of x and y wires, and a third read/write wire was threaded through all of the cores. To read or write a bit, half of the current needed to change the magnetisation of a core was placed on one of the x wires and half on one of the y wires. Only the core at the intersection of the two wires was subject to a current sufficient to change its polarity. This enabled direct access to each bit in the array.
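The coincident-current selection idea can be sketched in code. This is a toy model, not the original hardware: the half-select current of 0.5 and flip threshold of 1.0 are illustrative values chosen so that only a core driven by both wires flips.

```python
# Toy model of coincident-current core selection (illustrative values).
# A core changes state only when the total drive current reaches the
# full threshold; a half-select current on one wire alone does nothing.

HALF_CURRENT = 0.5   # hypothetical half-select drive on one wire
THRESHOLD = 1.0      # hypothetical current needed to flip a core

class CorePlane:
    def __init__(self, rows, cols):
        # 0 and 1 stand for the two magnetisation directions
        self.cores = [[0] * cols for _ in range(rows)]

    def write(self, x, y, bit):
        # Drive half current on x wire `x` and y wire `y`:
        # only the core at their intersection sees the full current.
        for i in range(len(self.cores)):
            for j in range(len(self.cores[i])):
                drive = (HALF_CURRENT if i == x else 0.0) + \
                        (HALF_CURRENT if j == y else 0.0)
                if drive >= THRESHOLD:
                    self.cores[i][j] = bit

    def read(self, x, y):
        # Real core memory read destructively, by driving the core to 0
        # and sensing whether it flipped; here we simply return the state.
        return self.cores[x][y]

plane = CorePlane(4, 4)
plane.write(2, 3, 1)
print(plane.read(2, 3))  # the selected core flipped
print(plane.read(2, 2))  # its half-selected neighbour did not
```

The point of the model is the `drive` sum: every core on the selected x wire and every core on the selected y wire receives a half-select current, but only the one core receiving both reaches the threshold.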
The original patent for core memory
Today this sounds like an obvious method, but at the time it was quite new and Forrester had doubts that it would work. Perhaps the repeated exposure to half the current needed to change the polarity would eventually cause a slow degradation in the state of the core. It didn't, and coincident-current core memory worked!
A special test bed computer was built just to verify the principle. Then in 1953 Whirlwind was equipped with a new core memory that doubled its speed, improved its reliability and made it cheaper to keep running.
Close up of part of a Whirlwind core
Core memory was just what the developing computer industry needed - you would think that the world would beat a path to Forrester's door. It didn't at first, but as Forrester himself said:
"it took three or four years for the industry accepted the notion and then it took the next seven years to convince them that they hadn't all thought of it first!"
Core memory may have been slow to catch on but when it did it wiped out all other forms of memory and survived well into the era of the integrated circuit. Many production machines relied on it to provide megabytes of fast, relatively cheap and highly reliable RAM. The core memory industry always managed to reduce the size, cost and heat generation to stay ahead of the competition until well into the 70s. Core storage was also the foundation of the minicomputer revolution. In fact the microcomputer was the first generation that didn't rely on core storage.
Even today the word "core" is still used in computing. Programmers talk of "a core dump" to mean a diagnostic printout of the contents of memory. The term was first used literally to refer to a dumping the contents of core memory to a printer.
16K of core memory
After inventing core storage and the fastest computer available, you might expect Forrester to have looked forward to building more, better and faster machines - perhaps even starting the Forrester Computer Corporation or something similar.
At first it looked as if Whirlwind would be the last of its kind. No-one was interested in a machine fast enough to do real time computations - except for the military. The one big real time computation problem of the age was tracking aircraft, enemy aircraft that is.
During World War II, radar had been used in conjunction with manual tracking. Computers had even been used in combination with radar to aim guns - but they were analog computers. Now the idea was to use a digital computer to process and track the air traffic movements over large areas. It sounded like a good idea, but the military weren't prepared to take the risk on such a huge project.
The Navy were about to scrap Whirlwind when the cold war broke out. It became very clear that the USSR could send a warhead over the pole and into the US before anyone had time to notice it. A system based on the Whirlwind was the only reasonable solution to the defence problem. SAGE - the Semi-Automatic Ground Environment system - was to be a country-wide system. It would be built in deep caverns carved into mountains and would become the source of a mythology of secret super computers in charge of the world's arsenal of weapons. (If you want to know more, see the film of the Forbin Project - the sidebar contains more information!)
It would also pioneer many of the techniques that we now take for granted - interactive visual displays, light guns, tandem operation for near-zero downtime - and it would also become some of the last valve-based computing machinery in use. The SAGE system was still being modernised in the early 80s!
Jay Forrester was appointed director of the SAGE project and his design and management skills helped make it all work. But the story of SAGE isn't the story of one man and it's too fascinating to skip over so quickly.
In 1956 Forrester decided that the SAGE project was running smoothly enough to do without him. He moved to the MIT School of Management to work on computer models of social systems. He became Professor of System Dynamics - a subject he virtually invented, and a logical move if you consider it the application of servo theory to a broader subject matter!
Even here there were computer oriented spinoffs from his work. His study of urban dynamics is credited as providing the theory needed to invent games such as SimCity.
Forrester is nowadays best known for system dynamics and his wider work in economics, but anyone interested in computer history will, above all, thank him for inventing core memory - something simple, elegant and absolutely essential to the growth of the computer industry and computer science.