|Machine Learning Pioneer Vladimir Vapnik Joins Facebook|
|Friday, 28 November 2014|
Facebook AI Research (FAIR) has announced that it has hired Vladimir Vapnik - co-inventor of the Support Vector Machine and a founder of statistical learning theory.
It probably isn't fair to characterise Vladimir Vapnik as a theoretician, but if you read his book The Nature of Statistical Learning Theory then you will probably think that he is at least on the more mathematical side of machine learning. Together with Alexey Chervonenkis, he constructed a theory of learning, generally referred to as Vapnik-Chervonenkis or VC theory, that introduced ideas that guided the invention of the Support Vector Machine (SVM).
To give you some idea what VC theory is about, consider the idea of the VC dimension of a classifier f that depends on a set of parameters. The classifier is said to "shatter" a set of data points if, for every possible assignment of classification labels to those points, the parameters can be adjusted so that f makes no errors, i.e. it correctly divides the space into the two labeled groups. The VC dimension of the classifier is the size of the largest set of points for which some arrangement can be shattered. For any larger set, there is always a specific labeling of the points that defeats the classifier.
You can see that VC dimension gives you a measure of how flexible the classifier is. For example, a straight line can shatter any three points that are not collinear, but no set of four points, and hence its VC dimension is 3. The more "wiggles" a classifier boundary can have, the higher its VC dimension.
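The shattering idea above can be checked by brute force for tiny point sets. The sketch below (the helper names are invented for illustration, and linear separability is tested as a linear-programming feasibility problem using SciPy, an assumed dependency) confirms that a line shatters a triangle of three points but fails on four points in the XOR arrangement:

```python
# Brute-force illustration of "shattering" by lines in the plane:
# a point set is shattered if every one of the 2^n labelings can be
# separated by some line w.x + b = 0.
from itertools import product

import numpy as np
from scipy.optimize import linprog


def separable(points, labels):
    """Feasible iff some (w, b) satisfies y_i * (w.x_i + b) >= 1 for all i."""
    pts = np.asarray(points, dtype=float)
    ys = np.asarray(labels, dtype=float)
    # Rewrite the constraints as -y_i * (w.x_i + b) <= -1
    # in the variables (w1, w2, b), and ask the LP solver for feasibility.
    A = -ys[:, None] * np.hstack([pts, np.ones((len(pts), 1))])
    res = linprog(c=[0, 0, 0], A_ub=A, b_ub=-np.ones(len(pts)),
                  bounds=[(None, None)] * 3)
    return res.status == 0  # 0 = feasible, 2 = infeasible


def shattered(points):
    """True if every +1/-1 labeling of the points is linearly separable."""
    return all(separable(points, labels)
               for labels in product([1, -1], repeat=len(points)))


three = [(0, 0), (1, 0), (0, 1)]          # non-collinear triple: shattered
four = [(0, 0), (1, 1), (1, 0), (0, 1)]   # XOR layout: not shattered
print(shattered(three))  # True
print(shattered(four))   # False - the XOR labeling defeats any line
```

The failing labeling for the four points is exactly the XOR pattern: opposite corners in the same class, which no single line can separate.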
Consideration of the generalizability of learning led Vapnik to apply VC theory to linear classifiers and to invent the Support Vector Machine, the basic idea of which is to find a dividing hyperplane that separates two groups with a "maximum margin". This is the plane that separates them and is as far as possible from members of both groups. Such a decision boundary has the maximum size of "no man's land" around it and so should be better at classifying new data.
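The maximum-margin idea can be seen in a few lines with scikit-learn's linear SVM (an assumed dependency here, with made-up data); the margin width, the size of the "no man's land", comes out of the learned weight vector as 2/||w||:

```python
# Minimal sketch of a hard-margin linear SVM on two separable groups.
import numpy as np
from sklearn.svm import SVC

# Two linearly separable clusters (illustrative data).
X = np.array([[0, 0], [1, 0], [0, 1],
              [3, 3], [4, 3], [3, 4]], dtype=float)
y = np.array([-1, -1, -1, 1, 1, 1])

clf = SVC(kernel="linear", C=1e6)  # large C approximates a hard margin
clf.fit(X, y)

w = clf.coef_[0]
margin = 2.0 / np.linalg.norm(w)   # width of the "no man's land"
print("hyperplane normal:", w)
print("margin width:", margin)
```

The support vectors are the points nearest the boundary from each group; only they determine the hyperplane, which is what makes the margin as wide as possible.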
SVMs used to be the favored technique for machine learning, but recently neural networks have taken center stage. In many ways you could sum up the current state of machine learning as SVMs versus neural networks.
It might therefore come as a surprise that Vapnik is joining some long-time collaborators - Jason Weston, Ronan Collobert and Yann LeCun - who are all well known for work on neural networks. In fact, back in 1995 LeCun and Vapnik had a bet. Vapnik thought that by the year 2000 neural networks would be out of fashion and not used for anything serious. He lost the bet, but this doesn't mean that the SVM and similar non-neural network approaches are out for the count. It seems very likely that "big data" versions of the SVM will perform as well as deep networks.
Vladimir Vapnik with Yann LeCun
What is Vapnik going to do at FAIR?
The announcement states:
"He is working on a new book and will be collaborating with FAIR (Fundamentals of Artificial Intelligence Research) research scientists to develop some of his new ideas on conditional density estimation, learning with privileged information and other topics."
It is interesting, or should that be worrying, how many "top" AI researchers are now working for Internet companies - Google, Facebook, Microsoft, Baidu and more. There is a long history of industry research labs - Bell, HP, Xerox PARC and so on - but somehow this seems different. It might just be that the goals of organizing information using AI are more threatening than the aims of the engineering research of the past. Or it might be that there is a distinct lack of trust that any of these companies will use the techniques to make the world a better place.
|Last Updated ( Friday, 28 November 2014 )|