Author: Julian Togelius
This is a light-hearted and non-technical exploration of the way games and AI fit together. However, if you know something about AI then you won't find much that is new here. In fact, your best hope of getting some fun out of the book is to think up controversial alternative viewpoints and argue them out in your head against what is being said!
The book starts with a brief history of computer games and AI, ending with a few comments on AlphaGo and the question "is it intelligent?"
This leads on to the next chapter, where the question of what you do when you play a game is considered. This is mostly a common-sense list of what game play is all about, but it raises the next question - what is intelligence? - which is the subject of Chapter 3. Here we meet the Turing Test and the problems it causes in defining intelligence. Eventually we are led to the solution that intelligence is problem solving, but without the domain of discourse being limited. The problem with the Turing test is clearly that a non-intelligent chatbot can get some way towards passing it because it is restricted in its domain to chit-chat. The conclusion at the end of the chapter is that the question "can machines think?" is too meaningless to have an answer. I think I prefer the Edsger W. Dijkstra quote:
"The question of whether Machines Can Think... is about as relevant as the question of whether Submarines Can Swim."
Which, sadly, is not included in the text. The point here is that this is philosophy. Once you return to engineering, the only question you can ask is "does it do the job?", which is what the Turing test, imperfectly formulated though it is, was intended to decide.
Chapter 4 continues to ask the same question, but this time in the form "do video games have AI?" The answer, of course, is that they don't. One thing you can be sure of is that if anything has AI then everything will have AI. It is far too important to be left to video games, even though these are a good testing ground.
Chapter 5: Growing a mind is a little more technical and sort of explains the A* algorithm. Of course, it doesn't get very technical and it doesn't explain why something that sounds so trivial has such interesting properties. This is the chapter where the author's bias towards the genetic algorithm becomes clear. This isn't surprising, as the GA has been the main focus of his research. After describing natural selection, the chapter goes on to explain how genetic principles can be applied to learning. As an alternative to the GA, reinforcement learning, in the form of Q-learning, is described.
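The Q-learning idea the chapter describes is compact enough to sketch in a few lines. The toy corridor task, parameter values and reward scheme below are my own illustrative choices, not examples from the book:

```python
import random

# A minimal tabular Q-learning sketch on a toy 5-state corridor:
# states 0..4, actions 0 (left) and 1 (right), reward 1 only on reaching state 4.
# All parameter values here are illustrative, not from the book.
N_STATES, ACTIONS = 5, (0, 1)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Deterministic transition: move left or right; reward 1 at the goal."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

random.seed(0)
for _ in range(500):                       # episodes
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit the current Q table, sometimes explore
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        # the core update: nudge Q(s,a) toward r + gamma * max_a' Q(s',a')
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# The greedy policy learned for every non-goal state should be "move right" (1)
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

The point of contrast with the GA is visible in the update line: Q-learning improves one state-action estimate at a time from experienced rewards, rather than recombining and mutating whole candidate solutions.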
At the end of the chapter it is suggested that evolutionary learning gives rise to big structural changes while Q-learning is about smaller improvements. Again I find myself in disagreement. Modern evolutionary theory is much more complicated than this account suggests. Gene-based evolution isn't as free as it might seem, as the basic body plan is well established and difficult to change radically. Things evolve into increasingly specialized units which find it difficult to back out and start again. Evolving into a cul-de-sac is one of the drawbacks of evolution and a reason organisms go extinct. Given a sufficiently evolved mechanism, reinforcement learning is much more likely to make a radical adaptation possible. I am unlikely to evolve a third arm, but I am quite capable of designing one if the reward were high enough.
Chapter 6 asks if games learn from you. Of course they don't if you mean "learn" in the sense of general intelligence. They can, and do, modify their characteristics, however.
Chapter 7, titled "Automating creativity", takes issue with the argument that computers can't do anything that we haven't told them to do and so cannot be creative. The counter-argument seems to be that complex systems do things that we don't expect and learning systems can learn things we don't expect. If creativity is about the unexpected this is a reasonable argument, but again I don't think creativity is just this. Creativity is often confused with the art of found objects, but in most cases this isn't creativity but curating. You can look at the art that a machine produces, pick the best and marvel at its creativity, but in fact it is the person doing the picking displaying taste rather than creativity. So can computers be creative? The answer would be easy if we could say with precision when a human is creative.
The final three chapters are more about games design in the light of AI approaches than about AI. The final chapter expresses the opinion that games are the best place to grow and develop AI. I'm not sure that this is true. Games provide a tame world that AI can be benchmarked on. To know that a neural network can learn to play Go is compelling evidence that we are on the right track, but it isn't intrinsically useful in any wider sense. There are now so many arenas - medical, home assistants, robotics, cashierless stores, warehouse picking, robot soccer and so on - that bring physical challenges to AI that are simply absent from any sort of game play. I'm not saying that AI in games isn't a thing for the future - it's just not as important as this book claims.
This is a fun book to read, well written, quirky and friendly, but it covers the subject in a conventional way without exploring many of the less trodden paths. It would best suit a reader who doesn't know much about AI and who hasn't thought about such things.
There are also many missing ideas - differential machines, GANs, and adversarial examples to name just three. Even on its pet topics, such as the genetic algorithm, it doesn't explore the really mind-boggling question - why is the genetic algorithm so unreasonably effective?
The biggest problem with the book is that it tackles aspects of AI that are more like philosophy than technology. The key point is that if you know your subject then there is nothing much new here for you, and if you don't then you are getting a fairly biased view. After all, the book is about AI and games, not AI in general.
Last Updated: Tuesday, 09 July 2019