|Artificial Intelligence Basics|
Author: N. Gupta, R. Mangla
This slim book promises to teach you the basics of AI as a self-teaching guide. This may be true, but you need to know what topics are covered, or rather what isn't covered. There is nothing about modern machine learning, nothing about neural networks at all and certainly nothing about more advanced, but fun, ideas such as generative adversarial networks (GANs). This means that the book, enthusiastic though it is, misses out on the huge transformation of AI that these techniques have brought about.
Instead what we have is coverage of topics that were "hot" back in the 1980s.
The book starts off with a look at general/human intelligence and artificial intelligence. We have an introduction to the Turing test and what AI might be trying to achieve.
Chapter 2 introduces the notion of representation, something that is far less important today, now that symbolic AI has more or less been relegated to the sidelines. There are also some strange uses of terminology. For example, breaking a problem down into sub-problems is referred to as recursion, which in general it isn't.
Chapter 3 moves on to classical search as you would find in any account of two-person games. This is tackled as an abstract search of a tree: depth-first, breadth-first and A*.
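For readers unfamiliar with the topic, the sort of tree search the chapter covers fits in a few lines of Python. This is a generic illustration, not code from the book, and the graph is an invented example:

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: explore the tree level by level and
    return the first path found from start to goal."""
    frontier = deque([[start]])   # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in graph.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# A small made-up state graph
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "E": ["F"]}
print(bfs(graph, "A", "F"))  # ['A', 'C', 'E', 'F']
```

Depth-first search is the same idea with a stack (`pop()` instead of `popleft()`), and A* replaces the plain queue with a priority queue ordered by cost plus a heuristic estimate.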
Chapter 4 introduces real game trees, the extra idea of the minimax principle and a refinement, alpha-beta pruning. This is a very short chapter for a big subject.
Chapters 5 and 6 introduce the basics of expert systems, or knowledge systems. This is very simple rule-based knowledge representation. What is missing is any discussion of uncertainty, which is the huge unsolved problem of expert system application. Chapter 7 is the last that deals with pure AI topics and it is on learning. You might think that here we would be told about classical learning methods - the perceptron, regression, support vector machines (SVMs) and, of course, neural networks. The chapter, however, is nothing like this as it covers different types of symbolic learning in a very general way.
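The simple rule-based representation that Chapters 5 and 6 describe can be sketched as naive forward chaining over if-then rules. This is a generic illustration with made-up rules, not the book's own formalism, and it shows exactly what such systems lack - every conclusion is drawn with complete certainty:

```python
def forward_chain(facts, rules):
    """Naive forward chaining: keep firing any rule whose
    conditions are all known facts until nothing new is added."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if set(conditions) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical rules in the animal-identification style of classic expert systems
rules = [
    (["has_feathers"], "is_bird"),
    (["is_bird", "cannot_fly", "swims"], "is_penguin"),
]
print(forward_chain(["has_feathers", "cannot_fly", "swims"], rules))
```

Real-world evidence is rarely this clean, which is why practical expert systems had to bolt on certainty factors or probabilities - the uncertainty problem the book never mentions.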
The final three chapters aren't really about AI at all. Chapter 8 is all about Prolog, a language that was once a hot topic, but today is hardly used. That it is so neglected is a shame, but this chapter introduces Turbo Prolog, which only runs under DOS and is no longer supported, whereas there are modern versions of Prolog. Chapter 9 is a short and pointless introduction to Python with no relevance to AI at all. Chapter 10 is an essay on robotics and other cybernetic machines. This is mostly history and might be considered fun, but it isn't really "basic AI".
This is a book that covers AI as it was in the 1980s. It is a trip down memory lane if you were there at the time. It isn't that a reader shouldn't know something about the symbolic approach to AI, but just presenting this as the totality of modern AI misses out most of the exciting and successful things that have happened in the 21st century. Even the treatment of the topics it does cover is shallow, idiosyncratic and sometimes misleading.
If you want to read something on AI that is more like history than current practice, you might get something out of this, but I'd look elsewhere if you want a modern account of the exciting breakthroughs in the subject.
|Last Updated ( Tuesday, 23 June 2020 )|