|Geoffrey Hinton Leaves Google To Warn About AI|
|Written by Mike James|
|Monday, 01 May 2023|
Geoffrey Hinton is often referred to as the "godfather of AI" - if anything, this is an understatement. He stuck with the fundamentals of neural networks when almost everyone else had given up. So when the "father" of neural-network-based AI says he is worried, it is an important statement.
We talk of impending AI winters, but Hinton lived through one and kept faith with the neural network approach when most others were putting their efforts into symbolic AI - logic, planning, reasoning, expert systems and other similar "engineering" approaches. When he found that the neural networks of the time weren't learning well enough, and so were lacking in performance, he attempted to create alternative approaches that learned better. The Boltzmann machine that he investigated proved to be too tough a nut to crack, but on returning to neural networks he found that the huge increase in computing power, and the amount of data that the Internet made available, changed everything:
"We thought it was not working because we didn't have quite the right algorithms, we didn’t have quite the right objective functions. I thought for a long time it was because we were trying to do supervised learning, where you have to label data, and we should have been doing unsupervised learning, where you just learned from the data with no labels. It turned out it was mainly a question of scale."
Basically, it was a matter of keeping on with the quest and waiting for the hardware and the data to catch up.
Now we are in an era where AI is proving that it works. Perhaps not to the extent that we want it to, but the field is so much more of a success than it ever was in the dark days of the 80s and 90s.
Hinton's co-workers - Alex Krizhevsky and Ilya Sutskever - are currently among the elite of the AI world. In 2013 Google took over their company, DNNresearch Inc, and Hinton went to work for Google part time. After a short stint with Google Brain, Sutskever went on to be one of the founding members of OpenAI, the company responsible for GPT and for the "Code Red" at Google, which worried that its revenue might be in danger if it didn't create something similar as fast as possible.
Now we have news that Hinton has left Google. In an interview with the New York Times he says that he has done so in order to comment freely on the dangers of AI. In a tweet he added:
"In the NYT today, Cade Metz implies that I left Google so that I could criticize Google. Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly."
In the interview he says:
"Such fierce competition might be impossible to stop resulting in a world with so much fake imagery and text that nobody will be able to tell what is true anymore."
“The idea that this stuff could actually get smarter than people — a few people believed that. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
He is reported as saying that the technology should be paused, something that other eminent AI researchers, including Yoshua Bengio and Stuart Russell, have already requested in a petition that now has over 27,000 signatures.
While Hinton's statement echoes the sentiment of the petition
“I don’t think they should scale this up more until they have understood whether they can control it."
he goes further by drawing a parallel with the Manhattan Project and the development of the atomic bomb, quoting Oppenheimer:
“When you see something that is technically sweet, you go ahead and do it.”
and then justifies his position:
“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,”
Is it possible that we are witnessing an Oppenheimer moment:
“Now I am become Death, the destroyer of worlds”
Personally, I find it sad that after so long being a leader of AI, Hinton no longer wants to go in that direction and seems to regret his past work.