Despite the lack of a wiring diagram it was possible to guess at various principles behind the functioning of collections of neurons. Given that Ramon y Cajal established only in 1911 that the brain is made up of neurons, it is remarkable that by 1943 people were already speculating on how neural networks might function.
Some of the earliest work was done by McCulloch and Pitts, who showed how idealised neurons could be put together to form circuits that performed simple logic functions. This was such an influential idea that von Neumann even described the delay logic elements of the EDVAC in terms of idealised neurons, and many later pioneering computers made use of neuron-like circuit elements.
At this time it really did seem that the structure of the brain had much to tell us not only about intelligent learning machines but about ordinary programmable computers as well. Normally we think of computers as the product of hard engineering – electronics, Boolean logic, flow diagrams – and yet in the earliest days the pioneers actually thought there was a direct connection between what they were doing and the structure of the brain.
Minsky must have been strongly influenced by this feeling that computers and brains were the same sort of thing, because his thesis was on what we now call “neural networks”. In those days you didn’t simulate such machines on general-purpose computers – you built them from whatever electronics came to hand.
In 1951 Minsky built a large machine, the first randomly wired neural network learning machine (called SNARC, for Stochastic Neural-Analog Reinforcement Computer), based on the reinforcement of simulated synaptic transmission coefficients.
After getting his PhD in 1954 he was lucky enough to be offered a Harvard Fellowship. He had started to think about alternative approaches to AI, but he was still troubled by the inability to see the neural structures that would tell him so much about how the brain is organised. So he invented a new type of microscope – the confocal scanning microscope. Because the basic operation of the microscope was electronic, he also attempted some of the first computer image processing, using the SEAC at the Bureau of Standards – without much success, however, because the memory wasn’t large enough to hold a detailed image and process it.
MIT AI Lab
In 1959 Minsky and John McCarthy founded what became the MIT Artificial Intelligence Laboratory, which in time grew into one of the main centres of AI research in the world. The lab attracted some of the most talented people in computer science and AI. Minsky continued to work on neural network schemes, but increasingly his ideas shifted to the symbolic approach to AI, and to robotics in particular.
The difference between the two approaches is subtle but essentially the neural network approach assumes that the problem really is to build something that can learn and then train it to do what you want, whereas the symbolic approach attempts to program the solution from the word go.
In the early days of AI the neural network approach seemed to be having more success. Indeed there was almost a hysteria surrounding the development of one particular type of neural network – the perceptron.
Rosenblatt invented the single-layer perceptron in 1958 and went on to prove some very powerful theorems about what it could learn. The perceptron convergence theorem was a sort of guarantee: if a classification could be represented by a perceptron at all, the learning procedure would find weights that implemented it. The AI community at the time oversold the idea with demonstrations and outlandish claims for what could be done with a single perceptron.
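Rosenblatt’s learning rule is simple enough to sketch in a few lines of Python (the function names here are illustrative, not from any historical source): the unit fires when a weighted sum of its inputs crosses a threshold, and every mistake nudges the weights towards the correct answer. Trained on a linearly separable function such as AND, the rule settles on working weights, just as the convergence theorem promises.

```python
# A minimal sketch of the perceptron learning rule (illustrative names).
def predict(w, b, x):
    # Threshold unit: fire (output 1) if the weighted sum exceeds zero.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def train(samples, epochs=20, lr=0.1):
    # Start from zero weights; each error nudges the weights and bias
    # towards the correct answer by a small step lr.
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(w, b, x)
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# AND is linearly separable, so the rule converges on it.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(and_data)
print([predict(w, b, x) for x, _ in and_data])  # [0, 0, 0, 1]
```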
Then the bubble burst. Minsky had met Seymour Papert and they were both thinking about the problem of working out exactly what a perceptron could do. The shocking truth revealed in the book they wrote together, “Perceptrons”, was that there really are some very simple things a perceptron cannot learn. In particular, concepts such as “odd” and “even” – parity – are beyond a perceptron, no matter how big it is or how long you give it to learn.
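Minsky and Papert’s point can be seen in miniature with two-bit parity, i.e. XOR. A crude brute-force search over a grid of weights – an illustration here, not a proof – finds threshold-unit weights for AND but none for XOR, because no single line through the plane separates the odd inputs from the even ones.

```python
# Does a single threshold unit with weights (w1, w2) and bias b
# compute the two-input Boolean function fn on all four inputs?
def computes(fn, w1, w2, b):
    return all((w1 * x1 + w2 * x2 + b > 0) == fn(x1, x2)
               for x1 in (0, 1) for x2 in (0, 1))

# Brute-force search over a grid of weights from -2 to 2 in steps of 0.2.
# An illustration of linear separability, not a mathematical proof.
def separable(fn):
    grid = [i * 0.2 - 2 for i in range(21)]
    return any(computes(fn, w1, w2, b)
               for w1 in grid for w2 in grid for b in grid)

print(separable(lambda a, b: a and b))  # AND: True
print(separable(lambda a, b: a != b))   # XOR (parity): False
```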
The perceptron book effectively discouraged further work in the field, simply because no funding organisation would give grants to what now looked like crackpot AI research. For some ten years, until the start of the 80s, the neural network approach to AI was effectively dead. A few places, mainly psychology and neurology labs, still worked on the problem, but progress was very slow. What started the revival was the discovery that multi-layer networks could be trained, and that they could solve the problems Minsky and Papert had proved impossible for a single perceptron.
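Why an extra layer makes the difference can be shown with a hand-wired example (the weights below are chosen by hand for illustration – the revival came from training procedures such as backpropagation that find such weights automatically): a hidden unit computing OR and another computing NAND, combined by an AND unit, yield exactly the XOR function a single perceptron cannot represent.

```python
# A single threshold unit: fire if the weighted sum exceeds zero.
def unit(w, b, xs):
    return 1 if sum(wi * xi for wi, xi in zip(w, xs)) + b > 0 else 0

# Two hand-wired layers compute XOR, which no single unit can.
def two_layer_xor(x1, x2):
    h_or = unit([1, 1], -0.5, [x1, x2])      # fires unless both inputs are 0
    h_nand = unit([-1, -1], 1.5, [x1, x2])   # fires unless both inputs are 1
    return unit([1, 1], -1.5, [h_or, h_nand])  # AND of the two hidden units

print([two_layer_xor(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```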