Pain And Panic Over Rogue AI - We Need a Balance
Written by Sue Gee   
Wednesday, 31 May 2023

Jumping on the bandwagon of hysteria created by Geoffrey Hinton's recent warning about the existential threat AI poses to humanity, researchers and academics in the field of AI are endorsing a "Statement on AI Risk".

Its single sentence reads:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

This statement comes from the Center for AI Safety (CAIS), a "research and field-building" non-profit founded in 2022 with the explicit purpose of ensuring the safe development and deployment of AI.

AI risk has emerged as a global priority, ranking alongside pandemics and nuclear war. Despite its importance, AI safety remains remarkably neglected, outpaced by the rapid rate of AI development. Currently, society is ill-prepared to manage the risks from AI. CAIS exists to equip policymakers, business leaders, and the broader world with the understanding and tools necessary to manage AI risk.

So the sentiment expressed in the open letter fits perfectly with the CAIS raison d'être and on the face of it is a very reasonable statement, but one way and another it does seem to be rabble-rousing. Indeed it is explicitly designed to draw attention to the growing number of experts and public figures who consider that AI poses "severe risk", and putting Geoffrey Hinton's name first on the list of signatories takes advantage of the recent media coverage of his concern that AI systems could eventually become more intelligent than humans and take over the planet.

Now it is the turn of Yoshua Bengio, the second signatory listed and co-recipient, with Geoffrey Hinton and Yann LeCun, of the 2018 Turing Award for his work in developing deep learning methods, to be in the media limelight. Reports of an interview conducted by the BBC suggest that Bengio "feels lost over his life's work". This, however, is a misconstruction of his actual words. In the interview he does express his concern about "bad actors" getting hold of AI, saying:

"It might be military, it might be terrorists, it might be somebody very angry, psychotic. And so if it's easy to program these AI systems to ask them to do something very bad, this could be very dangerous.

But having said:

"It is challenging, emotionally speaking, for people who are inside [the AI sector]"

his next words are:

"You could say I feel lost, but you have to keep going and you have to engage, discuss, encourage others to think with you."

and I'm inclined to consider the words after the "but" as contradicting the idea of feeling lost.

CAIS letter

The CAIS petition follows in the wake of the open letter from the Future of Life Institute:

calling on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.

Since we last reported on it, see Gauging Sentiment Towards AI Models, its number of signatories has surpassed thirty thousand, but one individual who has resisted adding his name to it is Yann LeCun, even though he had signed the first Future of Life Institute petition about the use of AI in the context of autonomous weaponry, see AI Researchers Call For Ban On Autonomous Weapons. Judging from his Tweets, LeCun's response to the current wave of hysteria is that AI researchers have the ability to design AI systems that are safe and that current AI systems are nowhere near having the type of ability and influence being attributed to them.

A balanced view of future AI systems can also be found in the most recent blog post by OpenAI founders Sam Altman, Greg Brockman and Ilya Sutskever on the topic of Governance of superintelligence. They take as their starting point:

it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations

and go on to open a discussion of how to manage the risks this could pose.

At the conclusion of the blog post they reiterate their belief that AI will make the world a better place, bringing economic growth and an improved quality of life, and that, since it would be risky and difficult to stop the creation of superintelligence, it is imperative to get it right.

More Information

Statement on AI Risk

Center for AI Safety

Pause Giant AI Experiments: An Open Letter  

Related Articles

Hinton Explains His New Fear of AI

Geoffrey Hinton Leaves Google To Warn About AI

Hinton, LeCun and Bengio Receive 2018 Turing Award

Runaway Success Of ChatGPT

Chat GPT 4 - Still Not Telling The Whole Truth

AI Researchers Call For Ban On Autonomous Weapons


