|IBM Debater Argues Like A Human - But How?|
|Written by Mike James|
|Wednesday, 20 June 2018|
IBM is the outlier when it comes to AI. Most other companies are taking the neural network path, but IBM is very much into old-fashioned engineering. The new system has neural networks involved, but they are not the main tool in the kit.
Project Debater is the next in IBM's line of AI systems, following on from Deep Blue and Watson. It is trying to master human speech interaction so that IBM can offer voice-based services such as tech support, medical consultations and so on. In this case the idea is for the machine to engage in a formal debate using voice input and output. The topic of the debate isn't disclosed beforehand, so both the human and the computer have to construct their arguments on the spot.
To show how far the system has come IBM organized a small public debate. The format was to prepare a four-minute opening statement, followed by a four-minute rebuttal and a two-minute summary. The first topic was “we should subsidize space exploration”, followed by “we should increase the use of telemedicine”. Votes cast by the small audience resulted in the score: Machine 1: Humans 1.
The speech synthesis and recognition were very smooth. The arguments were occasionally not so convincing, but the overall effect was still impressive. There was a very large amount of "stagecraft" involved. A tall black panel with a very large blue animated mouth dominated the puny human debaters, and there were moments when you had to wonder whether a joke or two had been pre-loaded and would have been produced more or less irrespective of the topic.
It is clear that, just as in the case of the chess-playing Deep Blue and the game contestant Watson, IBM wants to put on a good show, and anything that makes the system look more intelligent or more sci-fi is fair game.
What is really interesting is that this is an old-school engineering approach to AI. The term isn't intended to be derogatory, just that there are two broad approaches to AI. The first is almost magical in that we let the system figure out how to do the job. This is what neural networks do - we train them to get the response we want, but we don't do detailed setting up and implementation. The second approach is to work out what is needed and implement it using code. This minimizes the training phase because you engineer the code to analyse the spoken word, say.
For a long time the engineering approach to AI was the most successful, but then computer power grew to the point where neural networks could learn enough to do real tasks and eclipsed the alternative approach. The problem with the learning approach is that you don't really know how the system works - this makes it all the more magical when it does work, but presents a bigger worry that it might not work. The pure engineering approach is generally deterministic in that an engineer would have a good idea what the system was about to do given the inputs.
IBM is keeping the overall design of its AI fairly secret, but it has published lots of papers on the sub-systems that make up Debater. There are neural networks included, but they are mostly used as instruments to measure things, performing tasks such as sentiment analysis. In the list of techniques "Argument Mining" is top, and this seems to be a complicated mix of natural language processing algorithms: detecting claims in documents, detecting evidence, negating claims and so on. There are others - Stance Classification and Sentiment Analysis are about working out whether an argument is for or against.
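To make the idea of classifiers as "instruments" concrete, here is a deliberately toy sketch of claim detection and stance scoring. The claim markers and the pro/con lexicons are invented for illustration - IBM's actual components are far richer and largely learned rather than hand-coded.

```python
# Toy lexicon-based claim detection and stance scoring.
# All word lists are illustrative assumptions, not IBM's.

CLAIM_MARKERS = {"should", "must", "ought"}
PRO_WORDS = {"benefit", "improve", "support", "advance"}
CON_WORDS = {"cost", "risk", "harm", "waste"}

def contains_claim(sentence: str) -> bool:
    """Crude claim detection: does the sentence use a modal claim marker?"""
    return any(w in CLAIM_MARKERS for w in sentence.lower().split())

def stance_score(sentence: str) -> int:
    """Positive score = pro, negative = con, zero = neutral/unknown."""
    words = set(sentence.lower().split())
    return len(words & PRO_WORDS) - len(words & CON_WORDS)

sentence = "we should subsidize space exploration because it will benefit science"
print(contains_claim(sentence))   # True
print(stance_score(sentence))     # 1
```

Even this crude version shows the engineering style: each sub-task is an explicit, inspectable component, so an engineer can predict what the system will do with a given input.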
Where the neural networks do seem to be heavily used is in speech recognition and speech production. This is reasonable as the data are far less easy to analyse and understand and hence it is difficult to engineer a solution.
To make the contrast between learning and engineering clear, how would a full neural network design look? The straightforward approach would be to build an end-to-end neural network solution, i.e. take the inputs - the opponent's speech - and learn to map them to the outputs - the system's own speech. This is unlikely to work, however, as there are too many problem domains involved. A better approach, and one that is only now becoming possible because of the hardware at our disposal, is to build a number of co-operating networks: one for speech recognition, one for speech production, one for understanding the words and another for arguing a case. You can see that this is a tough problem.
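The co-operating-networks idea can be sketched as a simple pipeline of stages, one per sub-problem. Every stage below is a stub - in a real system each would be a separately trained network - and all of the names are illustrative, not anything IBM has published.

```python
# Hypothetical pipeline of co-operating components, one per sub-problem.
# Each function stands in for what would be a separate trained network.

def recognize_speech(audio: str) -> str:
    # Stub: pretend the "audio" has already been transcribed to text.
    return audio

def understand(text: str) -> dict:
    # Stub "understanding": extract the topic and pick a stance.
    return {"topic": text, "stance": "con"}

def argue(analysis: dict) -> str:
    # Stub argument generation from the analysis.
    return f"I argue against the motion: {analysis['topic']}."

def produce_speech(text: str) -> str:
    # Stub text-to-speech: just pass the text through.
    return text

def debate(audio: str) -> str:
    # Compose the stages - the analogue of wiring the networks together.
    return produce_speech(argue(understand(recognize_speech(audio))))

print(debate("we should subsidize space exploration"))
```

The hard part the sketch hides is exactly what the article points at: each stage is easy to stub but very hard to learn, and the stages have to agree on intermediate representations.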
Engineered AI like Debater works well, but it usually lacks a facility for deep generalization. Neural networks learn the patterns and structure in the data, and we hope that any new cases follow the same patterns and structure present in the sample. If you train a neural network to classify cats and you show it a dog, then you will get a classification that corresponds to the closest-looking cat. If you engineer a vision system to recognize cats, then when you show it a dog the system breaks and you generally get an error message. This is a little unfair in that clever design can avoid this, but the point is that an engineered system can only deal with situations that the engineer has had the wit to recognize or imagine. As soon as reality goes outside the box, there is no generalization. This is both the advantage and disadvantage of engineered AI systems.
What is clear is that IBM is bringing the same advantages of "big hardware" that made neural networks practical to its AI systems. Perhaps both approaches have the potential to work if you throw enough hardware at them.