The Paradox of Artificial Intelligence
Written by Harry Fairhead   
Friday, 14 April 2023

The emergence of large language models such as GPT-4 has revitalized the question of what we mean by "intelligence" in practical terms. And once we adopt an operational definition, does it defeat the whole idea of "artificial intelligence"? The solution might be to realize that intelligence isn't a property, but a relationship.

There is a longstanding problem that people working on artificial intelligence have had to cope with. Whenever you create your latest amazing program that does something that previously only a human could do, the intelligence that you were working on sort of melts away, as if it never was.

Look at the early days, when it seemed right to try to create artificial intelligence by writing programs that could, say, play chess. Obviously you have to be intelligent to play chess. It is a subtle game that involves thinking, whatever that is, as well as planning and strategy. It is a game that needs human intelligence, so a program that plays chess has to be intelligent.

 

Only, of course, once you have built a program that solves the chess problem, you realise that it is nothing of the sort.

It is clearly a collection of algorithms that seems to do the same job. It is often said that computers don't play chess like humans, and the reason the intelligence vanishes is that there are non-intelligent ways of solving some of the problems that we solve using intelligence.

That is, there is a set of problems that, when approached using the wetware of the human brain, seem to embody the idea of intelligent thought. However, just because the human brain needs to tackle something in a way that you are happy to label "intelligence", it doesn't mean that this is the only way. Given the superior speed and accuracy of a digital computer, and given the different way that its memory works, you can solve the chess problem using nothing that looks like intelligence.

That is, we can only play chess using something that it is reasonable to label "thinking" and "intelligence", but given a fast enough computer a perfect game could be played with nothing but a brute-force search of all the moves. In practice this isn't possible, as no computer is fast enough, but the methods that we actually use are essentially about speeding up the search and making an incomplete search "good enough" for the job.
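To see how little "thinking" is involved, here is a minimal sketch of the kind of search a chess program is built on - depth-limited negamax with alpha-beta pruning. The game interface used here (legal_moves, play, undo, evaluate) is hypothetical, standing in for whatever a real engine provides; evaluate is assumed to score the position from the point of view of the side to move:

def search(game, depth, alpha=float("-inf"), beta=float("inf")):
    # Depth-limited negamax with alpha-beta pruning. The depth cut-off
    # and the heuristic evaluate() are exactly the "good enough"
    # incomplete search described above.
    if depth == 0 or not game.legal_moves():
        return game.evaluate()          # a heuristic guess, not truth
    best = float("-inf")
    for move in game.legal_moves():
        game.play(move)
        score = -search(game, depth - 1, -beta, -alpha)
        game.undo(move)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:               # prune: no need to look further
            break
    return best

Everything here is enumeration and arithmetic; speed it up enough and the play looks masterly, but nowhere is there anything you would be tempted to call thought.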

As is often pointed out

airplanes don't fly by flapping their wings

and it could be added that

helicopters don't fly like planes either...

So some attempts at creating artificial intelligence do nothing of the sort. They simply find more appropriate ways of getting computers to solve the same problems that humans do.

It's not so much artificial intelligence - more advanced computing. 

This, of course, raises the question of whether there can be approaches that do work towards creating true artificial intelligence.

Some people think that the way something is done doesn't make a great deal of difference.

The fact that a computer can play chess or recognize a face is the important thing, and to inquire about the nature of the internal workings before ascribing intelligence is not sensible. After all, a human is a finite state machine and so can be emulated by a big state table, a very big state table - so where did that intelligence go?

This is the essence of the Chinese Room problem. A room accepts questions in Chinese posted through a slot in the door and posts back answers in Chinese. It looks as if there is an intelligence inside, but then I tell you that it contains a person with a big book who looks up each question and copies out the answer. The operator doesn't speak or understand any Chinese. The argument is that this is not intelligence because no understanding is involved. The mechanism seems to matter.
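A toy version makes the point, and it doubles as the "big state table" of the previous paragraph - a state table for a finite state machine is the same trick with (state, input) pairs as keys. The questions and answers here are placeholders for a book that would, in reality, be astronomically large:

# The "book": a lookup table from questions to answers.
BOOK = {
    "你好吗?": "我很好，谢谢。",        # "How are you?" -> "Fine, thanks."
    "今天天气好吗?": "今天天气很好。",   # "Is the weather good?" -> "Very good."
}

def room(question: str) -> str:
    # The operator: look the question up, copy the answer out.
    # No understanding of Chinese is involved anywhere.
    return BOOK.get(question, "对不起。")   # "Sorry." for unknown questions

Scale the table up far enough and, from outside the door, the room is indistinguishable from a fluent speaker - yet it is just a lookup.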

It is like trying to capture a butterfly - as soon as you pin it to a display board the (living) butterfly is no more.

One of the problems with not worrying about the way things work is that you end up with all sorts of uncomfortable conclusions. If, instead, you adopt the idea that there is some particular way of working that counts as "intelligence captured", then you have to say what that way might be.

 


How is it different from digital computation?

You can't just say that it is analog computation and that this is different, because it is obvious that a digital machine can simulate any analog machine given enough resources. However you try to characterize whatever is required to be intelligent, it seems that it can be reduced to a program and run on a digital computer. This means that it is a list of instructions that you can look at and understand and, well... it just doesn't seem to be intelligent. Just as chess playing is reduced to searches and lookups, whatever you propose as the mechanism for intelligence is reducible to code, and hence the language of algorithms applies. Again the intelligence just evaporates.

Some look to copying biological systems, such as the brain, in the form of, say, neural networks. In this case there is often an appeal to the idea of emergent behaviour to keep the "intelligence" alive.

Suppose you took a lot of artificial neurons, put them in a box and let the box interact with the world. After some time, perhaps, you would start to see behaviours that you hadn't programmed - perhaps behaviours so sophisticated that you would be happy to say that the system had emergent intelligence.

So at long last you have artificial intelligence in a box.

The elusive quantity didn't vanish the moment you completed your program. This is exactly what has happened with large language models such as GPT-4. They are just neural networks, and yet they exhibit behaviour that convinces many users that they are intelligent. What is more, even experts are starting to claim that they have emergent behaviour - they do things that don't seem to be closely related to examples in their training set.

But... suppose you now take the neural network and record its state - its weights and connections. That state can once again be expressed as an algorithm. You can now produce a program without the hardware and without the training phase, and just use it.

Once again the whole thing is there for you to examine as a program. It is understandable, just like the chess program. The point is that even a very complex neural network is just a finite state machine, and surely finite state machines cannot be intelligent?
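To see what such a recording amounts to, here is a tiny frozen network written out as plain code. The weights are invented numbers standing in for whatever training produced - a real model has billions of them, but the principle is identical:

import math

# Hypothetical weights recorded from a trained two-input network.
W_HIDDEN = [[0.7, -1.2], [0.4, 0.9]]
B_HIDDEN = [0.1, -0.3]
W_OUT = [1.5, -0.8]
B_OUT = 0.05

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def network(x1: float, x2: float) -> float:
    # The entire "intelligence": fixed multiplications and additions.
    hidden = [sigmoid(w[0] * x1 + w[1] * x2 + b)
              for w, b in zip(W_HIDDEN, B_HIDDEN)]
    return sigmoid(sum(w * h for w, h in zip(W_OUT, hidden)) + B_OUT)

No hardware, no learning, no adaptation - just a fixed list of arithmetic instructions that you can read line by line.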

So where did the intelligence just evaporate to?


