Machine Learning For Dummies, 2e (Wiley)
Author: John Paul Mueller
but then dummies probably aren't equipped to understand such complications. OK, I promise to stop going on about why any book should ever be called "Dummies" anything, or why any reader would want to be seen reading a book that labels them a dummy. I do have a sense of humour, but Machine Learning For Dummies is about as far out as it gets, with the possible exception of "Brain Surgery for Dummies" and similar.
So what does a dummy expect to learn about machine learning?
The good news is that some 45 pages of Part 1 are all about general ideas that any dummy might manage to understand. It covers all of the basic ideas of what machine learning might be used for and some of the history. It also does a good job of explaining the various approaches, although how much these will make sense to a beginner is difficult to guess at. It is probably important, however, to know that there are different approaches - machine learning isn't a single technique.
Part 2 is where you can get started in a hands-on way - but you might not have been expecting this. It explains how to install and use Python and Google Colab. The Python implementation is Anaconda, which is not one I like, but the emphasis is on Jupyter notebooks, which I do like. This is 40 detailed pages of technical stuff that has nothing to do with machine learning. Are you going to cope with Python after this introduction? No, you are not, even if you aren't a dummy. It would have been better to make "can program in Python" a prerequisite.
Part 3 is on math, mostly matrices. If you don't like math then you aren't going to like Part 3, but then what are you doing here at all - machine learning is math-based. We also have a few pages on probability and statistics, and on functions and gradients, aka calculus. Finally, Chapter 10 gets us our first machine learning method on page 175, and it's the perceptron learning algorithm, which is illustrative but not really useful. Next come decision trees, naive Bayes, k-nearest neighbour, regression, PCA and so on.
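To give you a taste of the sort of thing Chapter 10 covers, the classic perceptron learning rule fits in a few lines of Python. This is my own minimal sketch, not code from the book, and the toy AND dataset is my choice of example:

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    # Labels in y are 0 or 1; the bias is folded in as weight w[0]
    w = np.zeros(X.shape[1] + 1)
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if w[0] + xi @ w[1:] > 0 else 0
            # The perceptron update: nudge weights towards misclassified points
            w[1:] += lr * (target - pred) * xi
            w[0] += lr * (target - pred)
    return w

# Toy example: learn the AND function, which is linearly separable
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w = train_perceptron(X, y)
preds = [1 if w[0] + xi @ w[1:] > 0 else 0 for xi in X]
```

It is "illustrative but not really useful" because it only converges on linearly separable data - try it on XOR and it never settles.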
We meet neural networks towards the end of Part 4, in Chapter 14, and use TensorFlow and Keras to try them out. For such a complex topic, 30 pages isn't enough. Chapter 15 introduces support vector machines - a technique that still deserves coverage, but one that isn't used as much any more. Finally we learn the advantages of having lots of learners and averaging their output.
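The point of that final ensemble chapter is easy to demonstrate. The following sketch is mine, not the book's: it simulates weak classifiers that are each right only 70% of the time and shows that a majority vote over 25 of them is correct far more often than any single one:

```python
import random
random.seed(0)

def noisy_classifier(truth, accuracy=0.7):
    # A weak learner: returns the correct label with probability `accuracy`
    return truth if random.random() < accuracy else 1 - truth

def majority_vote(truth, n_learners=25):
    # Average lots of independent weak learners by taking a majority vote
    votes = sum(noisy_classifier(truth) for _ in range(n_learners))
    return 1 if votes > n_learners / 2 else 0

# Compare one weak learner against the ensemble over many trials
trials = 2000
single = sum(noisy_classifier(1) for _ in range(trials)) / trials
ensemble = sum(majority_vote(1) for _ in range(trials)) / trials
```

Run it and `single` comes out near 0.7 while `ensemble` is well above 0.9 - the key caveat, which bagging and random forests work hard to engineer, is that the learners' errors must be largely independent.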
The penultimate part of the book - Part 5, Applying Learning To Real Problems - takes us through classifying images, scoring opinions and building recommenders. The final part consists of three chapters, each with ten ways to improve learning models, use data ethically and master packages.
Overall this is quite a good book, but it isn't for dummies. It is too hands-on and it covers far too much ground. If you have a grasp of Python and are not afraid of math then you might find it suits your needs. It can't avoid the math, but it also doesn't go in for unnecessary math. As to the quality of the explanations - they are all good and straight-talking without being too smart or overly friendly. However, the book could do with a lot more illustrations, which generally make the math and the ideas simpler to understand.
You also need to be aware that this is a book that takes machine learning to mean something more than just neural networks, which is the headline-making technique of today. This isn't a problem as long as you know what you are getting.
For recommendations of books on Machine Learning see AI Books To Inspire You in our Programmer's Bookshelf section.
To keep up with our coverage of books for programmers, follow @bookwatchiprog on Twitter or subscribe to I Programmer's Books RSS feed for each day's new addition to Book Watch and for new reviews.
Last Updated ( Friday, 14 January 2022 )