|Hinton, LeCun and Bengio Receive 2018 Turing Award|
|Written by Mike James|
|Wednesday, 27 March 2019|
Often said to be the Nobel Prize of computing, the Turing Award goes this time round to three pioneers of neural networks. The citation refers to the three as "Fathers of the Deep Learning Revolution" but I see one father and two offspring.
Established in 1966 and named for Alan M. Turing, this award is the most prestigious of those made by the ACM (Association for Computing Machinery) and is now worth $1 million, shared between the awardees.
It is presented annually to recognize the contributions of computer scientists and engineers who have:
created the systems and underlying theoretical foundations that have propelled the information technology industry.
There is no question that this year's recipients have pushed forward the boundaries of deep learning. They are responsible for much of the successful AI we are starting to take for granted.
"Working independently and together, Hinton, LeCun and Bengio developed conceptual foundations for the field, identified surprising phenomena through experiments, and contributed engineering advances that demonstrated the practical advantages of deep neural networks. In recent years, deep learning methods have been responsible for astonishing breakthroughs in computer vision, speech recognition, natural language processing, and robotics—among other applications."
All true, and LeCun and Bengio have certainly done some groundbreaking work, but Geoffrey Hinton is arguably the only real father that this area of computing can look to. He was applying the backpropagation method back in the early 1980s when just about everyone else believed that the whole idea of neural networks was a dead end and a waste of time.
Yann LeCun was a postdoctoral student of Hinton's in the late 1980s and is best known for creating convolutional neural networks. Yoshua Bengio has worked with Hinton on deep learning papers, but he is more of an academic, typically publishing 20 papers a year. Arguably his most important work in the long term is likely to be on biologically inspired learning methods. The volume of Bengio's output has to be kept in mind when you interpret the figure of 131 citations a day, the most of any computer scientist, closely followed by Hinton's 127 and LeCun's 62.
When you look at Hinton's work you get a very different picture. He has focused on "big ideas" at every turn, staying true to multilayered neural networks when most were of the opinion that they could never work, as indicated by their poor performance. He then switched to working on Boltzmann machines, which are still a theoretical headache today. To make progress, he simplified them to create the Restricted Boltzmann machine, which led to the auto-encoder and many pre-training methods. Eventually computer power caught up with the demands of deep networks and, with some care, it was possible to train much deeper networks, only to discover that this was the solution he had been looking for all along. Since then he has been involved in constructing other, more sophisticated architectures based on neural networks - capsule networks - which may in the long run prove to be the most important of all.
While I am of the opinion that Yann LeCun and Yoshua Bengio have made huge contributions to the subject, Geoffrey Hinton has been working at it for longer and has instigated and shared many of its big ideas. When you make a list of AI researchers that has Yann LeCun or Yoshua Bengio on it, there are many other candidates to be included, but if the list starts with Geoffrey Hinton you have entered a different league.
ACM will present the 2018 A.M. Turing Award at its annual Awards Banquet on June 15 in San Francisco, California.