Marvin Minsky - AI Visionary
Written by Sue Gee   

In the early 1950s computers were comparatively crude devices with far less computing power than a modern desktop PC. All the more surprising, then, that some users were already thinking about ways of making the machines emulate human intelligence.

For some, this was simply a misunderstanding.

The computer, surely, was a giant electronic brain, so of course it could think – it might even be superior to us. This really was the naive view current at the invention of the machine, and you still encounter it today, although thankfully less often.

While you can forgive the uninitiated for believing that the huge machines were capable of intelligent thought, what about the people who were closer to them?

Surely Alan Turing can't have been serious when he predicted that a machine would pass the Turing test and successfully mimic a human within a matter of decades?

Remember, at the time Turing was thinking about the problem, computers filled a large room and ran more slowly than a first-generation PC. How is it possible that the early computer creators could overrate their machines to this extent?

It was a very common mistake then and it is almost as common now.

 


 

The effect on the best workers in the AI field seems to be a series of highs, when the goal is in sight, and very deep lows, when they are convinced that no progress has been made. Marvin Minsky, one of the most important of the early AI researchers, created both highs and lows in the history of Artificial Intelligence.

 


Marvin Lee Minsky 
(August 9, 1927 - January 24, 2016)

 

Marvin Minsky’s father was an ophthalmologist and the family home was full of lenses and prisms. Marvin took delight in taking the instruments apart and finding out how they worked. His father was often left with the task of putting them back together again.

Marvin went to Harvard, where he took courses in mathematics, neurophysiology and psychology. He graduated in 1950 with a BA in mathematics, but he was clearly no ordinary mathematician.

He then moved to Princeton to work on a PhD in mathematics, but not the sort of math that was current at the time: Minsky decided to work on connectionist theories of the brain. He studied everything he could find on the physiology and anatomy of the nervous system. He wanted to know how the brain learns, but there were too many gaps. He reasoned that if he could work out how the neurons were connected, he could reverse engineer what they actually do.

He was surprised to discover that the connection schemes had never been properly mapped, even for small parts of the brain. Many people, computer experts in particular, are under the impression that we somehow have a "wiring" diagram for the brain, or at least for some useful portion of it. The truth is that even today the human brain is not something for which you can order a circuit diagram from your local neurobiology shop. We have managed to work out the complete wiring of a few simple organisms, such as the nematode C. elegans, and of some small subsystems, but as yet nothing major.

At the time Minsky was looking at the problem, almost nothing was known about the functioning of neurons in groups. It was also difficult to see how it could be discovered using the techniques of the day: neural circuits are inherently three-dimensional, and the optical equipment of the time could only examine 2D slices. This was a problem that was to occupy Minsky for some time.

Neural networks

Despite the lack of a wiring diagram, it was possible to guess at various principles behind the functioning of collections of neurons. Given that the neuron doctrine – the idea that the brain is made up of discrete neurons – was only established around the start of the twentieth century by Santiago Ramón y Cajal, it is remarkable that by 1943 people were already speculating on how neural networks might function.

Some of the earliest work was done by Warren McCulloch and Walter Pitts, who in 1943 showed how idealised neurons could be put together to form circuits that performed simple logic functions. This was such an influential idea that von Neumann used neuron-like delay elements to describe the logic of his First Draft of a Report on the EDVAC, and many later pioneering computers made use of neuron-like circuit elements.
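To make the idea concrete, here is a minimal sketch in modern Python – my illustration, not McCulloch and Pitts' 1943 notation – of how such idealised threshold units realise simple logic functions:

```python
# An idealised McCulloch-Pitts neuron: it fires (outputs 1) exactly when
# the weighted sum of its binary inputs reaches its threshold.
def mp_neuron(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Simple logic functions, each realised by a single unit.
def AND(a, b): return mp_neuron([a, b], [1, 1], threshold=2)
def OR(a, b):  return mp_neuron([a, b], [1, 1], threshold=1)
def NOT(a):    return mp_neuron([a], [-1], threshold=0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
print("NOT 0:", NOT(0), "NOT 1:", NOT(1))
```

Since any Boolean function can be built from AND, OR and NOT, networks of such units can in principle compute anything a logic circuit can – which is exactly why the idea appealed to the computer pioneers.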

At this time it really did seem that the structure of the brain had much to tell us about ordinary programmable computers, let alone intelligent learning machines. Normally we think of computers as the product of hard engineering – electronics, Boolean logic, flow diagrams – and yet in the earliest days the pioneers actually thought there was a direct connection between what they were doing and the structure of the brain.

Minsky must have been strongly influenced by this feeling that computers and brains were the same sort of thing because his thesis was on what we now call “neural networks”. In those days you didn’t simulate such machines using general purpose computers – you built them using whatever electronics came to hand.

In 1951 Minsky built a large machine, the first randomly wired neural network learning machine, called SNARC, the Stochastic Neural Analog Reinforcement Calculator. It was based on the reinforcement of simulated synaptic transmission coefficients.
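The real SNARC was built from vacuum tubes, motors and clutches – no software was involved – but the reinforcement idea behind it can be sketched in a few lines of Python. Everything here (the network size, the threshold, the learning rate) is invented purely for illustration:

```python
import random
random.seed(1)

N = 8                                  # a handful of simulated neurons
# Random wiring: each unit receives input from three randomly chosen others.
wiring = {u: random.sample([v for v in range(N) if v != u], 3) for u in range(N)}
# "Synaptic transmission coefficients", one per connection.
weight = {(u, v): random.uniform(0.1, 0.9) for u in wiring for v in wiring[u]}

def step(active):
    """One update: a unit fires if its weighted input exceeds a threshold."""
    return {u for u in range(N)
            if sum(weight[(u, v)] for v in wiring[u] if v in active) > 0.5}

def reinforce(active, reward, rate=0.1):
    """Reward strengthens the coefficients of connections that just fired."""
    for u in active:
        for v in wiring[u]:
            if v in active:
                weight[(u, v)] = min(1.0, weight[(u, v)] + rate * reward)

state = {0, 1}
for _ in range(10):
    state = step(state)
    reinforce(state, reward=1.0)       # in SNARC an operator supplied the reward
```

The essential point is that nothing in the wiring encodes a solution; behaviour emerges as rewarded pathways are gradually strengthened.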

After getting his PhD in 1954 he was lucky enough to be offered a Junior Fellowship at Harvard. He had started to think about alternative approaches to AI, but he was still troubled by the inability to see the neural structures that would tell him so much about how the brain is organised. So he invented a new type of microscope – the confocal scanning microscope.

Because the basic operation of the microscope was electronic, he also attempted some of the first image processing using a computer – the SEAC at the National Bureau of Standards. He had little success, however, because the memory wasn't large enough to hold a detailed image and process it.

MIT CSAIL 

In 1959 Minsky and John McCarthy founded the MIT Artificial Intelligence Project, the forerunner of the MIT AI Lab and of today's Computer Science and Artificial Intelligence Laboratory (CSAIL), which remains one of the main centres of AI research in the world. The lab attracted some of the most talented people in computer science and AI. Minsky continued to work on neural network schemes, but increasingly his ideas shifted to the symbolic approach to AI, and to robotics in particular.

The difference between the two approaches is subtle, but essentially the neural network approach assumes that the real problem is to build something that can learn, and then to train it to do what you want, whereas the symbolic approach attempts to program the solution in from the word go.
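A toy contrast – my own example, not one from Minsky's work – makes the distinction concrete. To decide whether x is greater than y, the symbolic approach simply writes the rule down, while the learning approach starts with arbitrary weights and corrects them on examples, perceptron-style:

```python
import random
random.seed(0)

# Symbolic approach: the programmer encodes the solution directly.
def symbolic(x, y):
    return x > y

# Learning approach: a linear threshold unit whose weights start out random
# and are nudged towards the right answer by an error-correction rule.
w = [random.uniform(-1, 1), random.uniform(-1, 1)]

def learned(x, y):
    return w[0] * x + w[1] * y > 0

for _ in range(1000):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    target = 1 if x > y else -1
    if (1 if learned(x, y) else -1) != target:   # wrong? adjust the weights
        w[0] += 0.1 * target * x
        w[1] += 0.1 * target * y

print(symbolic(0.7, 0.2), learned(0.7, 0.2))     # both should say True
print(symbolic(0.2, 0.7), learned(0.2, 0.7))     # both should say False
```

The symbolic version is exact from the start; the learned version only approximates the rule, but it never needed to be told what the rule was.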

In the early days of AI the neural network approach seemed to be having more success. Indeed, there was almost hysteria surrounding the development of one particular type of neural network – the perceptron.



