|Written by Mike James|
|Thursday, 19 December 2019|
Sequential storage played an important role in computing history and gave rise to some very specialised hardware and methods. You might think that it is no longer relevant and we can forget the lessons of the past, but sequential storage is still with us and its algorithms are worth knowing.
Today we take it for granted that we can store data on terabyte solid state drives, but high-capacity storage has posed problems for computer designers and users since the early days. While main memory, or RAM, has long been the glamour end of the business, with speed being its main concern, secondary storage has always been the real workhorse of computing.
It has to be admitted that you don’t really need to introduce secondary storage as a theoretical concept. You can appreciate the workings of a computer without ever invoking its name. However, sometimes secondary storage simply fits in with what you want to do more neatly than RAM.
What exactly is secondary storage?
The rather obvious answer is that secondary storage is anything that isn’t primary storage – and strangely enough this is about as good a definition as you can get. Primary storage is the memory that the CPU looks to for its program and the data it is operating on. Most often primary storage is RAM but other memory architectures are possible – a stack for example. All you can say about secondary storage is that it is generally slower and cheaper than primary storage and after that things get complicated.
It is possible, however, to distinguish two very general classes of secondary storage – sequential and direct access. Sequential access was probably invented first and it has some claim to a place in the theory of computer science so let’s start with it.
Sequential storage is very easy to understand but it tends to cause programmers who underestimate it lots of problems.
The archetypal sequential storage device is punched paper tape. Binary numbers can be stored on the tape by punching a row of holes – a hole means a one and no hole means a zero. Paper tape has been in use since the earliest computers and even before this it was used as a message storage medium for teletype machines.
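The hole-per-bit idea can be sketched in a few lines of code. This is a toy illustration, not any particular tape standard: each byte becomes one row of eight positions, with `O` standing for a punched hole (a one) and `.` for no hole (a zero).

```python
# Toy sketch of paper tape encoding - not a real tape format.
# Each byte is one row; 'O' marks a punched hole (1), '.' no hole (0).
def punch_row(byte):
    """Return the hole pattern for an 8-bit value, most significant bit first."""
    return ''.join('O' if byte & (1 << bit) else '.' for bit in range(7, -1, -1))

for ch in "HI":
    print(punch_row(ord(ch)))
```

Running this prints `.O..O...` and `.O..O..O` – the rows a punch would produce for the ASCII codes of "H" and "I".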
You might even say that Babbage invented an early version of it. His computer was supposed to have used punched cards strung together to make a continuous chain. In this case the cards were actually being used more like primary storage than secondary storage because the program was directly executed from the cards. My guess is he would have recognised paper tape but probably not the way in which it was used. Paper tape storage was rarely used as a way of running a program directly, as faster main memory was available. The paper tape provided a library of programs that could be loaded into the machine and run as required.
As a historical aside, it is interesting to note that special purpose machines such as the code breaking Colossus used at Bletchley Park did use paper tape as part of their computing mechanism. In this case speed really was important, and mechanical contraptions of pulleys and tensioners were invented that would run a paper tape loop past a photocell at many miles per hour. Also, just before the Second World War, Konrad Zuse used old 35mm film stock with holes punched in it for his early computer because it was easier to obtain than paper tape.
Returning to the discussion of how things work, the key thing about paper tape is that it provides “sequential” storage. That is, you cannot just go to any point in the tape and read or write something. The data is always accessed in the same one-after-the-other order. Some paper tape readers had the ability to “back up” one position, but none, to my knowledge, ever attempted to move to a particular location and read what was punched, let alone move to a blank piece of tape and then punch some holes! Paper tape is inherently “sequential” and so is a whole range of secondary storage – because if you look carefully enough it is just a paper tape in disguise!
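The access contract just described can be made concrete with a small sketch. This is a hypothetical class, not any real device driver: the only way to get data is to read the row under the head and advance, the most a reader can do in reverse is step back one row, and there is no way to seek to an arbitrary position.

```python
# Minimal sketch of the sequential-access contract - hypothetical, not a driver.
class Tape:
    def __init__(self, rows):
        self._rows = list(rows)
        self._pos = 0          # the row currently under the read head

    def read(self):
        """Read the row under the head and advance - the only way forward."""
        if self._pos >= len(self._rows):
            raise EOFError("ran off the end of the tape")
        row = self._rows[self._pos]
        self._pos += 1
        return row

    def back_up(self):
        """Some readers could step back exactly one row - never further."""
        self._pos = max(0, self._pos - 1)

tape = Tape([10, 20, 30])
first = tape.read()    # 10
tape.back_up()
again = tape.read()    # 10 - re-read the same row after backing up
second = tape.read()   # 20
```

Note what the class deliberately lacks: no `seek()`, no indexing, no in-place write. Any algorithm built on such a device has to be framed as one or more forward passes, which is exactly why sequential storage bred its own family of algorithms.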
“Tape” storage really took off when magnetic recording became a practical proposition. It may all seem obvious to us that magnetic recording is workable but when the inventor of magnetic recording, Valdemar Poulsen, tried to get a patent on the idea in 1898 he was told that it was against all the known laws of magnetics. Even so it still worked!
The strange truth is that the first use of magnetic recording in computing wasn’t tape and it wasn’t used for secondary storage. A drum coated with magnetic material was spun at high speed and data was written and read by a set of fixed read/write heads. Only a relatively small amount of data could be stored on each track, but it could be accessed fast enough to make it suitable as primary memory or as cache to a smaller amount of faster memory. Slowly such drums died out because faster, cheaper methods of building primary and cache memory were invented. The magnetic tape, on the other hand, went from strength to strength and it is still in use doing jobs that no other technology can deal with.
It isn’t at all clear which of the early machines first used magnetic tape storage but they were very quickly standard issue and the gentle rocking to and fro of large spools of tape became as much a symbol of the computer as the rows of flashing lights. The Harvard Mark II (1947) and the Univac (1951) used magnetic tape but it took some time for the idea to become completely accepted, even for business machines. The LEO I, for example, didn’t have any sort of backing store and used nothing but paper tape. It wasn’t until LEO II (1957) that tape drives were incorporated.
Early tape drives in use with the UNIVAC circa 1952
|Last Updated ( Friday, 27 December 2019 )|