TensorFlow 2 Pocket Primer
Author: Oswald Campesato
The key to appreciating this book is in the subtitle "Pocket Primer". It is a small book covering a huge subject, and you cannot expect deep mathematics or full coverage of everything TensorFlow can be used for. This is all the more relevant because TensorFlow is a system that lets you do maths in general, so its potential applications are open ended. That said, TensorFlow is usually associated with AI and neural networks. It is important to know that this book is about TensorFlow and not the theory of AI. In particular, it doesn't cover convolutional or recurrent neural networks.
The first chapter is an introduction to TensorFlow 2 and it contains many observations on how TF2 differs from TF1. If you know nothing about TensorFlow you might find some of these comments difficult to understand. By the end of the chapter you should have a good idea about TensorFlow's data types, its control flow and its major operations. What you will not have is much idea of why these facilities are provided. This is a book that is more useful the more you already know about the theory and why you want to do this sort of computation.
The second chapter continues the exploration, but in more detail. Here we learn how to work with tensors. A tensor is a multidimensional array, but if you need to be told this you are probably not going to cope with TensorFlow. Chapter 3 introduces datasets, both in general and in the form of specific examples such as CIFAR and MNIST.
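To give a flavour of the sort of material these early chapters cover, here is a minimal sketch of working with tensors in TF2's default eager mode. The values and shapes are invented for illustration, not taken from the book:

```python
import tensorflow as tf

# A tensor is a multidimensional array with a dtype and a shape.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])   # rank-2 tensor, shape (2, 2)
b = tf.ones((2, 2))                          # tensor of ones, shape (2, 2)

# In TF2 operations execute eagerly, so results are available immediately
# without building and running a graph as in TF1.
c = tf.matmul(a, b) + 1.0
print(c.shape)       # (2, 2)
print(c.numpy())     # [[4. 4.] [8. 8.]]
```

The contrast with TF1's session-and-graph style is one of the differences the first chapter dwells on.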
Chapter 4 is an odd one as it starts off by dealing with regression, a good old-fashioned statistical technique. I wasn't sorry to see it included, but other readers might find it a little out of place. Later we get to see how to use gradient descent to fit a multiple regression model.
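The idea of fitting a regression by gradient descent can be sketched in a few lines of TF2 using `tf.GradientTape`. This is a generic illustration of the technique, not code from the book, and the toy data is invented:

```python
import tensorflow as tf

# Toy data generated from y = 2x + 1; the coefficients are to be recovered.
xs = tf.constant([0.0, 1.0, 2.0, 3.0])
ys = tf.constant([1.0, 3.0, 5.0, 7.0])

w = tf.Variable(0.0)   # slope
b = tf.Variable(0.0)   # intercept
lr = 0.05              # learning rate

for _ in range(500):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(w * xs + b - ys))  # mean squared error
    dw, db = tape.gradient(loss, [w, b])
    w.assign_sub(lr * dw)   # one gradient descent step
    b.assign_sub(lr * db)

print(float(w), float(b))   # close to 2.0 and 1.0
```

Replacing the single `w` with a vector of weights gives multiple regression, which is essentially the route the chapter takes.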
Chapter 5 is also odd in that it too starts off with a look at classical approaches to classification - linear classifiers, KNN, decision trees and even the now somewhat out-of-fashion SVM. Then activation functions are introduced as a way of allowing linear functions to be "stacked" so that a multi-stage classifier is possible. Then we have a look at the advantages and disadvantages of different activation functions.
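The point about stacking is that without a nonlinearity, composed linear layers collapse into a single linear map, so the activation function is what buys extra expressive power. A plain NumPy sketch of two common choices, with invented sample values:

```python
import numpy as np

def sigmoid(x):
    # Smooth squashing function; saturates (gradient vanishes) for large |x|.
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # Cheap to compute; does not saturate for positive inputs,
    # but outputs exactly zero for all negative inputs.
    return np.maximum(0.0, x)

x = np.array([-2.0, 0.0, 2.0])
print(sigmoid(x))   # roughly [0.119 0.5 0.881]
print(relu(x))      # [0. 0. 2.]
```

Trade-offs of this kind - saturation versus dead units, smoothness versus cost - are the sort of pros and cons the end of the chapter surveys.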
At this point the book comes to an unexpected end. There is no discussion of neural networks of any sort. You will find a little relevant material in the appendix, which is more like a not-quite-final chapter. Here you will find information on multilayer networks and even what recurrent networks are, but all at a breakneck speed. This book would be so much better if the appendix was expanded to a full chapter, but then it might take two or three chapters and the book would not qualify as a pocket primer.
As far as it goes this book is good and useful, but it is only a primer. If you buy it with an expectation that it might be more, you could be disappointed.
To keep up with our coverage of books for programmers, follow @bookwatchiprog on Twitter or subscribe to I Programmer's Books RSS feed for each day's new addition to Book Watch and for new reviews.
Last Updated ( Tuesday, 28 July 2020 )