Hinton Explains His New Fear of AI
Written by Mike James
Friday, 05 May 2023
This week Geoffrey Hinton resigned from Google in order to speak freely and warn us about the existential threat posed to humanity by AI. In this video he explains what has changed his mind after an entire career working towards artificial intelligence.
Hinton's sudden departure from Google on May 1st (see Geoffrey Hinton Leaves Google To Warn About AI) has been a bigger talking point than we could have imagined. It made headline news around the world and continues to dominate the media. To that extent he has been highly successful in drawing attention to his new belief that digital intelligence could overtake biological intelligence and that the consequences spell doom for the human race.
This interview comes from the MIT EmTech Digital Conference, chaired by Will Douglas Heaven, and was conducted over video link to Hinton's home in London. It was posted to YouTube by Joseph Raczynski with this description:
Geoffrey Hinton essentially tells the audience that the end of humanity is close. AI has become that significant. This is the godfather of AI stating this and sounding an alarm. His conclusion: "Humanity is just a passing phase for evolutionary intelligence."
Asked at the beginning of the interview about his sudden resignation, Hinton explains that at 75 he's not as good at technical work as he used to be and that it was time to retire. His second reason was that very recently he had changed his mind a lot about the relationship between the brain and the kind of digital intelligence we're developing. Whereas he used to think that the computer models weren't as good as the brain, and that the aim was to improve the models to understand more about the brain, the performance of models such as GPT-4 has now made him think that the computer models work in a different way from the brain - they use backpropagation while the brain probably does not.
In response to a question from Heaven, Hinton explains in some detail how backpropagation works. Heaven next asks what it is about large language models that has stunned Hinton and flipped his ideas about backpropagation and machine learning in general. The answer is about scale. Hinton points out that GPT-4 possesses much more knowledge than any of us despite having only a trillion connections, whereas a human brain has 100 trillion, which suggests that the computer models are much better at acquiring knowledge than humans. So backpropagation might be a much, much better learning algorithm than the one we use, and this is what Hinton finds scary.
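For readers unfamiliar with the algorithm Hinton describes, the core idea of backpropagation can be sketched in a few lines of plain Python. This is a toy illustration, not Hinton's code or anything like GPT-4's training: a hypothetical 2-3-1 sigmoid network learning XOR with squared error, where the error signal is pushed backwards through the layers via the chain rule to adjust the connection weights.

```python
import math, random

random.seed(0)
NH = 3  # hidden units (an arbitrary choice for this toy example)

def sig(z):
    return 1.0 / (1.0 + math.exp(-z))

# XOR training data: (inputs, target)
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

# randomly initialised weights: input->hidden (w1, b1), hidden->output (w2, b2)
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(NH)]
b1 = [0.0] * NH
w2 = [random.uniform(-1, 1) for _ in range(NH)]
b2 = 0.0

def forward(x):
    h = [sig(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(NH)]
    y = sig(sum(w2[j] * h[j] for j in range(NH)) + b2)
    return h, y

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

err_before = total_error()
lr = 0.5
for _ in range(2000):
    for x, t in data:
        h, y = forward(x)
        # backward pass: the chain rule carries the error back layer by layer
        dy = (y - t) * y * (1 - y)                     # output-unit error signal
        dh = [dy * w2[j] * h[j] * (1 - h[j]) for j in range(NH)]
        for j in range(NH):                            # output-layer update
            w2[j] -= lr * dy * h[j]
        b2 -= lr * dy
        for j in range(NH):                            # hidden-layer update
            w1[j][0] -= lr * dh[j] * x[0]
            w1[j][1] -= lr * dh[j] * x[1]
            b1[j] -= lr * dh[j]
err_after = total_error()  # training drives the squared error down
```

The same two-step pattern - a forward pass to compute the output, a backward pass to assign blame to each weight - scales up to the trillion-connection models Hinton is talking about.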
Expanding on this, Hinton explains that the problem lies in digital models running as many copies on thousands of computers that communicate with each other, which means every copy instantly has access to everything the others have learned, sharing knowledge in a way that is impossible for humans.
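The knowledge-sharing Hinton describes can be made concrete with a toy sketch (an illustration of the general data-parallel idea, not GPT-4's actual training setup): two copies of the same model each see private data, then pool their gradients, so both instantly embody what either one learned.

```python
# Hypothetical toy: a linear model y = w * x fitted by gradient descent,
# with two copies that each see a different data shard and average gradients.

def grad(w, batch):
    # gradient of the squared error sum((w*x - y)^2) w.r.t. w
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

shard_a = [(1.0, 3.0), (2.0, 6.0)]    # copy A's private data (all on y = 3x)
shard_b = [(3.0, 9.0), (4.0, 12.0)]   # copy B's private data (same line)

w = 0.0   # both copies start from the same weights
lr = 0.02
for _ in range(200):
    # copies exchange gradients and average them...
    g = (grad(w, shard_a) + grad(w, shard_b)) / 2
    # ...so every copy applies the identical pooled update and stays in sync
    w -= lr * g
# w converges to 3.0: each copy now "knows" both shards without ever seeing them
```

A human, by contrast, cannot copy what another brain has learned by exchanging connection strengths - which is the asymmetry Hinton is pointing at.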
Hinton finds the way that ChatGPT is already capable of common sense reasoning scary because, once it has acquired more learning - by, for example, reading works of literature - it might be able to manipulate humans into doing bad things.
Asked by Heaven if we would be safe if there were no bad actors Hinton concedes we might be safer - but also points out that the political system is already so bad that we can't decide not to give assault rifles to teenage boys and expresses the root of his fear as:
smart things can outsmart us
He also explains how evolution has provided humans with checks and balances which control the goals we set for ourselves. His fear is that, lacking such controls, digital intelligence, which he sees as distinct from human intelligence, might set goals that conflict with ours and that non-human intelligence will prevail.
In the final section of the session Hinton answers questions from delegates at the conference.