Are These 'Killer Robots' or a Smart Move?

Human Rights Watch opposes the use of autonomous weapons

By ABC News
November 20, 2012, 4:32 PM

Nov. 22, 2012 -- A new report by Human Rights Watch warns against the possible use of "killer robots," autonomous weapons that would work without human involvement. Such weapons are not currently in use, but the report suggests that they could be developed within the next 20 to 30 years.

According to the report, entitled "Losing Humanity: The Case Against Killer Robots," military officials say that humans will retain some control over decisions to use such weapons lethally, but "their statements often leave open the possibility that robots could one day have the ability to make such choices on their own."

Human Rights Watch is most concerned with the protection of civilians during wars. The organization worries that the technology used to create autonomous weapons might not comply with international humanitarian law and that it would "also undermine essential non-legal safeguards for civilians."

Among the larger issues cited are a robot's inability to feel emotion and the question of accountability should something go wrong. While such a future may not be imminent, existing machines like drones suggest it is not out of the question. Below, we take a look at what makes a robot a "killer robot," how such weapons would work, and the arguments for and against them.

What Is a So-Called 'Killer Robot'?

The report divides robotic weapons into three categories:

1. Human-in-the-loop weapons: These weapons require human command to select targets and deliver force.

2. Human-on-the-loop weapons: These weapons select targets and deliver force with oversight from a human who can override their actions.

3. Human-out-of-the-loop weapons: These weapons select targets and deliver force without any human help or interaction.
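
Read in software terms, the three categories differ only in the gate that sits between target selection and the delivery of force. The sketch below is purely illustrative; the ControlMode enum and may_engage function are hypothetical names invented for this example, not drawn from the report or any real weapon system.

from enum import Enum, auto

class ControlMode(Enum):
    """The report's three oversight categories (names are illustrative)."""
    HUMAN_IN_THE_LOOP = auto()      # a human must command target selection and force
    HUMAN_ON_THE_LOOP = auto()      # the machine acts; a supervising human can override
    HUMAN_OUT_OF_THE_LOOP = auto()  # the machine selects and engages with no human role

def may_engage(mode: ControlMode, human_approved: bool, human_vetoed: bool) -> bool:
    """Whether force may be delivered under each oversight model."""
    if mode is ControlMode.HUMAN_IN_THE_LOOP:
        return human_approved       # nothing happens without an explicit command
    if mode is ControlMode.HUMAN_ON_THE_LOOP:
        return not human_vetoed     # proceeds unless the supervisor overrides in time
    return True                     # out of the loop: no human gate at all

The third branch is the report's core worry: once a system is out of the loop, there is nothing a human can supply to stop the engagement.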

How Would They Work?

An autonomous weapon would, according to the report, "sense an incoming munition, such as a missile or rocket, and…respond automatically to neutralize the threat. Human involvement, when it exists at all, is limited to accepting or overriding the computer's plan of action in a matter of seconds."
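
As a rough sketch of that cycle (every name below, and the five-second window, is a hypothetical stand-in rather than a detail from the report or any fielded system), the behavior might look like this:

import time

OVERRIDE_WINDOW_SECONDS = 5.0  # hypothetical; the report says only "a matter of seconds"

def compute_intercept(threat: str) -> str:
    """Stand-in for the sense-and-plan step the report describes."""
    return f"intercept plan for {threat}"

def human_override_requested() -> bool:
    """Stand-in for an operator veto signal; always False in this toy example."""
    return False

def fire(plan: str) -> None:
    """Stand-in for delivering force."""
    print(f"executing: {plan}")

def respond_to_incoming_munition(threat: str) -> None:
    """The computer plans a response and carries it out unless a human
    overrides before the window closes."""
    plan = compute_intercept(threat)
    deadline = time.monotonic() + OVERRIDE_WINDOW_SECONDS
    while time.monotonic() < deadline:
        if human_override_requested():
            return  # the operator rejected the computer's plan in time
        time.sleep(0.1)
    fire(plan)      # no override arrived: the system acts on its own

respond_to_incoming_munition("incoming rocket")

The structure is the point: the human's only role is a veto inside the deadline, and silence counts as consent.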

All lethal weapons currently in use, including drones, still involve some human interaction or decision-making. "Often piloted from halfway around the globe, these robotic aerial vehicles provide surveillance and identify targets before a human decides to pull the trigger, commanding the drone to deliver lethal force," the report notes.

Another example is the Counter Rocket, Artillery, and Mortar System (C-RAM), which the U.S. deployed in Iraq in 2005. According to the report, the system helped intercept rockets and other weapons and provided warnings to troops. The C-RAM detects a threat, but "a human operator certif[ies] the target" before it fires, the report reads, so a human remains in the loop.

Other countries have defense systems with similar capabilities. Israel has deployed the Iron Dome, which uses radar to identify threats and is armed with interceptor missiles that are fired with operator approval. South Korea has developed and begun fielding sentry robots with autonomous surveillance capabilities, but they rely on human command to fire.

According to a spokesman for the Department of Defense, weapons are not currently capable of making "suggestions" about targets.

"Weapons used in current unmanned systems, like the Predator, are entirely controlled by a human operator, albeit remotely," wrote U.S. Army Lt. Col. James Gregory in an email. "It would be incorrect to say that those types of weapon systems make 'suggestions' about targets. They do not have this capability."