Symbols with meaning
A diagram showing the rules for an expert system that accepts or rejects expense claims!
The shallowness of Eliza-type programs, and of the whole current approach to the Turing Test, doesn’t mean that we can’t tackle the problem of meaning using symbols and rules. The first real practical success of AI was the expert system.
Expert systems can diagnose illness, find oil, fix complex systems and so on. As with all AI products, they appear magical at first, and when you find out how they work they seem trivial. Remember that this is not a criticism but a fact of AI methods!
Expert systems work by using a collection of rules of the form
IF something THEN something
The program gets some information and then searches through its database for rules that match. For example, a rule might be
IF red spots THEN measles
The program might ask
“Have you any red spots?”
and if the answer is “Yes” it would conclude that you had measles.
I told you it was simple! In practice the rules don’t always get you to the solution in one step, and one rule’s conclusion can very well trigger another, and so on. One of the advantages of the expert system approach is that knowledge about a subject can be collected as small, simple rules, yet the resulting rule base can still be used to deduce more complicated outcomes. You can also have rules that include a measure of how strongly the conclusion follows from the information.
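The chaining idea above can be sketched in a few lines of code. This is a minimal toy, not how any production expert system is built; the rules and symptom names are invented for illustration.

```python
# A minimal forward-chaining rule engine. Each rule is a pair
# (set of conditions, conclusion) in the spirit of IF something THEN something.
# The rules and facts below are made up for illustration.

rules = [
    ({"red spots"}, "measles"),
    ({"measles", "fever"}, "see a doctor"),  # one rule's conclusion triggers another
]

def forward_chain(facts, rules):
    """Keep firing any rule whose conditions are all known facts,
    adding its conclusion, until nothing new can be deduced."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

known = forward_chain({"red spots", "fever"}, rules)
print("measles" in known, "see a doctor" in known)  # True True
```

Notice that "see a doctor" is reached in two steps: the first rule deduces "measles", which then satisfies the second rule.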
All in all, expert systems are actually very useful and another good example of weak AI. However, there are people who claim that this method can be extended to produce strong AI: a thinking machine!
The Cyc project, for example, aims to build a rule database so complete and all-encompassing that it will become intelligent, in the sense that it will be able to discuss topics and eventually reach the limits of human knowledge. Once it is at the limits of human knowledge, it will push on and invent new knowledge.
Of course, Cyc is more than just an expert system. The types of rules it uses have been extended to include logical expressions. The program is also able to modify and add to the rule database, and every night it is left to “think over” the day’s input. In the morning researchers check out the new rules it has created to see if they are reasonable. This is a very ambitious project, and many claim that the rule base will become hopelessly inconsistent long before it is complete. At the moment the project has served more to point out the difficulties of this approach than to demonstrate that it works.
There are many more sidetracks of the symbolic, or engineering, approach to AI, but the time has come to consider the only other real alternative: connectionism.
One of the main characteristics of human intelligence is the ability to learn. Animals also have this ability, but many symbolic programs can’t learn without the help of a human. Very early in the history of AI, people looked specifically at the ability to learn and tried to build learning machines. The theory was that if you could build a machine that could learn a little more than you told it, you could “bootstrap” your way to a full intelligence equal to or better than a human. After all, this is how natural selection worked to create organisms that could learn more and more.
One of the first attempts at creating a learning machine was the Perceptron. In the early days it was a machine, but today it would more easily be created by writing a program. The simplest perceptron has a number of inputs and a single output and can be thought of as a model of a neuron. Brains are made of neurons, which can be thought of as the basic building blocks of biological computers. You might think that a good way of building a learning machine would be to reverse-engineer a brain, but this has proved very difficult. We are still not entirely certain how neurons work to create even simple neural circuits, but this doesn’t stop us trying to build and use neural networks!
The basic behaviour of a neuron is that it receives inputs from a number of other neurons and, when the input stimulus reaches a particular level, the neuron “fires”, sending signals on to the neurons it is connected to.
The perceptron works in roughly the same way. When the signals on its inputs reach a set level, its output goes from low to high. In the early days we only knew how to teach a single perceptron to recognise a set of inputs. The inputs were applied and, if the perceptron didn’t fire when it was supposed to, its weights were adjusted. This was repeated until the perceptron nearly always fired when it was supposed to. This may sound simple, but it was the first understandable and analysable learning algorithm. In fact, it was its analysability that caused it problems. The perceptron was used in many practical demonstrations where it distinguished between different images and different sounds, controlled robot arms and so on, but then it all went horribly wrong.
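The apply-inputs-and-adjust loop can be sketched as code. This is the classic perceptron learning rule in its simplest form; the choice of the AND function as the pattern to learn, the learning rate and the epoch count are all illustrative assumptions.

```python
# A sketch of the perceptron learning rule with step-function output.
# The perceptron "fires" (outputs 1) when the weighted input exceeds a threshold.

def predict(weights, bias, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

def train(samples, n_inputs, rate=0.1, epochs=50):
    weights, bias = [0.0] * n_inputs, 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)
            # Adjust only when the perceptron fails to fire as it should.
            weights = [w + rate * error * xi for w, xi in zip(weights, x)]
            bias += rate * error
    return weights, bias

# Learn logical AND, a pattern a single perceptron *can* represent.
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train(and_samples, 2)
print([predict(weights, bias, x) for x, _ in and_samples])  # [0, 0, 0, 1]
```

The adjustment nudges the weights towards inputs that should have fired and away from those that shouldn’t, and for patterns a perceptron can represent this process is guaranteed to settle on a working set of weights.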
This is the book that killed research into neural networks for 15 years!
Its downfall was a book by Marvin Minsky and Seymour Papert that analysed what a perceptron could do, and it wasn’t a lot. There were just too many easy things that a perceptron couldn’t learn, and this caused AI researchers to abandon it for 15 years or more. During this time the engineering, symbolic approach dominated AI, and the connectionist learning approach was thought to be the province of the crank.
Then two research groups simultaneously discovered how to train networks of perceptrons. These networks are more generally called “neural nets” and the method is called “back propagation”.
While a single neuron/perceptron cannot learn everything, a neural network can learn any pattern you care to present to it. The whole basis of the rejection of the connectionist approach to AI was false. It wasn’t that perceptrons were useless; you just needed more than one of them!
Unfortunately, as with most AI rebirths, this one grew too quickly, and more was promised for the new neural network approach than could be achieved. Now that things have settled down, the neural net is just one of the tools available for learning to solve problems, and we are making steady progress in understanding it all.
Some of the connectionist school are working on the weak AI problem, applying their ideas to replace human intelligence in limited ways. Currently the symbolic approach still rules in some areas. There are no chess-playing neural networks that get very far, but there is a neural net that can beat most backgammon players!
There is no single approach that can claim to be best for all AI problems, but there is little doubt that the connectionist approach is the one favoured for the strong AI problem. The idea that you can take a few million artificial neurons, throw them together, let them self-organise and so produce intelligence is a way of avoiding the repeated disappointment of discovering exactly how an AI method works! Perhaps neural nets are the only possible route to the magic of intelligence and a conscious machine.