AI Researchers Call For Ban On Autonomous Weapons
Written by Mike James   
Tuesday, 28 July 2015

AI and robotics have been making headline-grabbing progress recently and high profile people have started to worry in public about the future. Now we have over 1000 signatures on a letter urging a ban on autonomous weapons. 

The letter was presented this week at the 2015 International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina and it has an impressive list of signatories.

The general press is making a big thing of Elon Musk joining forces with Stephen Hawking, but I Programmer readers will probably be more impressed by the fact Yann LeCun, Peter Norvig, Geoffrey Hinton, Yoshua Bengio and even Steve Wozniak are backing the proposal.




Weapons are growing ever more dependent on clever technology. The petitioners, however, draw a distinction between semi-autonomous weapons, which still need a human to make the final "kill" decision, and completely autonomous weapons, which take that final step on their own. It is the latter group that concerns them.

"Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms."


One of the problems is knowing exactly when a system fits into the category of "autonomous". As targeting systems become more advanced, AI creeps into them - there are already missiles that refine their target as they get closer. The arguments for and against such weapons are more subtle than they first appear, because handing over control is a continuum.

Another worry is that we are about to enter a new arms race:

"The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow."


Drafting a law that bans AI weapons is not going to be easy.

"In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control."


You may have your doubts about the warnings of imminent "terminator" scenarios that have been in the headlines recently. We are still very far from creating an AI system that is anywhere near conscious or self-aware. However, we already have the ability to build computer vision systems good enough to be put into weapons so that they can identify their own targets. Whether you regard this as a threat or not, it is worth talking about, because it is about to happen - or has already happened.

If you would like to sign the letter, you can.




More Information

Autonomous Weapons: an Open Letter from AI & Robotics Researchers

Related Articles

Halting Problem Used To Prove A Robot Cannot Computably Kill A Human    












Last Updated ( Tuesday, 28 July 2015 )