Deep Learning Researchers To Work For Google
Written by Mike James
Thursday, 14 March 2013
A neural networks start-up comprising Geoffrey Hinton and two of his research students, Alex Krizhevsky and Ilya Sutskever, has been acquired by Google to help computers understand human meaning.
University of Toronto Professor of Computer Science Geoffrey Hinton and his DNNresearch team work in the area of “deep learning” networks.
This research is of critical importance to Google, which has taken the unusual step of putting Hinton on its payroll while allowing him to divide his time between his university research in Toronto and his work at Google headquarters in Mountain View. Google has also agreed to help fund DNNresearch to the tune of $600,000 to support further work in neural networks.
A note posted by Hinton on his Google+ page explains his motivation for the move:
Last summer, I spent several months working with Google’s Knowledge team in Mountain View, working with Jeff Dean and an incredible group of scientists and engineers who have a real shot at making spectacular progress in machine learning. Together with two of my recent graduate students, Ilya Sutskever and Alex Krizhevsky (who won the 2012 ImageNet competition), I am betting on Google’s team to be the epicenter of future breakthroughs. That means we’ll soon be joining Google to work with some of the smartest engineering minds to tackle some of the biggest challenges in computer science.
Anybody who enrolled last fall in Coursera's Neural Networks for Machine Learning course, which was taught by Geoffrey Hinton, can't help but be impressed by the rate of progress being made in this area and by the fact that Hinton and his team are at the forefront of this research.
Hinton has already collaborated with Google on deep learning - see Google's Deep Learning - Speech Recognition - and it is obvious why Google should want to "poach" the talents of Hinton et al in this way. A Google Tech Talk by Hinton, dated 2010 and included in our article The Triumph Of Deep Learning, gives some idea of what was going on at a technical level - but things have already moved on a great deal since then!
While it seems (and is) deeply mathematical, the problems Hinton's area of research sets out to tackle aren't difficult to appreciate. The problems that Google wants to apply the techniques to are directly related to search and contextual meaning.
Take the sentence "I saw the Grand Canyon flying to Chicago." You know that this doesn't mean the Grand Canyon was flying to Chicago, because you know what kind of thing the Grand Canyon is and can use contextual clues. However, a computer doesn't possess this type of knowledge and so cannot correctly parse this type of sentence.
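As a toy illustration (not from the article, and using hand-made vectors purely for demonstration - a real deep network would learn such representations from data), the distributed word representations these networks produce let a program measure which words plausibly belong together, exactly the kind of contextual clue needed to rule out a flying canyon:

```python
import math

# Hypothetical, hand-made word vectors for illustration only.
# In a real system these would be learned from large text corpora.
vectors = {
    "flying":       [0.9, 0.1, 0.0],
    "airplane":     [0.8, 0.2, 0.1],
    "grand_canyon": [0.1, 0.2, 0.9],
    "landmark":     [0.0, 0.3, 0.8],
}

def cosine(a, b):
    """Cosine similarity: close to 1.0 means the words occur in
    similar contexts, close to 0.0 means they are unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# "flying" sits much closer to "airplane" than to "grand_canyon" -
# the kind of evidence a parser can use to attach "flying" to the
# speaker rather than to the canyon.
print(cosine(vectors["flying"], vectors["airplane"]))
print(cosine(vectors["flying"], vectors["grand_canyon"]))
```

The point is not the tiny vectors themselves but the mechanism: once words live in a learned vector space, "what goes with what" becomes a measurable quantity rather than hand-coded knowledge.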
According to Google Fellow Jeff Dean, Hinton's work has applications for voice- and image-based searches. As more and more users send search queries by snapping a picture with, or speaking to, their smartphones, Google has spent more research dollars trying to figure out ways to automatically derive contextual clues from images and sound.
Such searches are more difficult to parse for a number of reasons. First, the computer must figure out what the person is saying, or what a picture actually represents. Computer-science researchers have been working on areas such as voice recognition for decades, but the computational power to store and learn from massive amounts of audio data is a fairly recent development. After a computer determines what the person is asking for, it must then apply the kind of contextual clues that allow it to return a relevant result.
The combination of Hinton's expertise and Google's data should lead to some interesting results. Watch this space!
Press Release: U of T neural networks start-up acquired by Google
Google's Deep Learning - Speech Recognition
Speech Recognition Breakthrough
Google Helps Tell An Apple From Apple
Last Updated (Friday, 24 October 2014)