This week marks the third round of talks at the United Nations in Geneva on fully autonomous weapons, with Professor Toby Walsh of the University of New South Wales joining the Campaign to Stop Killer Robots to discuss the possibility of a preemptive ban.
Among the topics for discussion is the necessity of “meaningful human control” over the capacity to select and attack targets.
Image: Usa-Pyon / Shutterstock.com
“Any decision to use force should be made with great care and respect to protect the value of human life and dignity, which only humans are capable of doing,” said Nobel Peace Laureate Ms. Jody Williams of the Nobel Women’s Initiative, a co-founder of the Campaign to Stop Killer Robots.
“Countries should embrace the principle of meaningful human control over targeting and kill decisions and agree to swiftly begin negotiations on a preemptive ban on killer robots.”
Low-cost sensors and advances in artificial intelligence are making it increasingly possible to design weapons systems that would target and attack without further human intervention. If the trend toward ever-greater autonomy continues, the concern is that humans will start to fade out of the decision-making loop, first retaining only a limited oversight role, and then no role at all.
Several nations with high-tech militaries, particularly the United States, China, Israel, South Korea, Russia, and the United Kingdom, are moving toward systems that would give greater combat autonomy to machines.
The Campaign to Stop Killer Robots fundamentally objects to permitting machines to take a human life on the battlefield or in policing, border control, and other circumstances. Launched in April 2013, the global coalition of more than 60 non-governmental organisations calls for a preemptive ban on the development, production, and use of fully autonomous weapons systems.
This can be done by creating new international law as well as through domestic legislation.
The matter of “lethal autonomous weapons systems” (another term for fully autonomous weapons) is being considered by countries at the 1980 Convention on Conventional Weapons (CCW), a framework treaty that prohibits or restricts certain types of conventional weapons of concern. Its 1995 protocol banning blinding lasers is an example of a weapon being preemptively banned before it was acquired or used.
Many of the CCW’s 122 “high contracting parties” are expected to attend the third meeting on lethal autonomous weapons systems at the UN in Geneva this week, in addition to UN agencies, the International Committee of the Red Cross, and civil society groups coordinated by the Campaign to Stop Killer Robots. Chaired by Ambassador Michael Biontino of Germany, the meeting continues deliberations on the subject held in April 2015 and May 2014.
“Several countries and manufacturers affirm that they have ‘no plans’ to develop lethal autonomous weapons systems. Such pledges are welcome, but insufficient as they’re not a permanent solution to what’s coming if states fail to take action,” said Professor Noel Sharkey of the International Committee for Robot Arms Control.
“Policy commitments not to develop or use these weapons systems may crumble as soon as opponents acquire them. The risks are too high to ignore so the only logical way to avoid that is to legislate the ban.”
Nine countries have endorsed the call for a ban on fully autonomous weapons since 2013: Bolivia, Cuba, Ecuador, Egypt, Ghana, Holy See, Pakistan, State of Palestine, and Zimbabwe.
Many countries have been drawn to the notion of meaningful human control over weapons systems since the inception of the international debate. More than 30 states have specifically addressed the principle or concept of human control in their CCW statements, usually characterising it as meaningful, appropriate, or effective.
Most of these states explicitly support the requirement for meaningful human control and almost all have called for more in-depth discussions on the approach.
“If states will not confirm that there needs to be meaningful human control over weapons then they are deliberately leaving the door open for systems that can kill people without that control,” said Richard Moyes of Article 36, a co-founder of the Campaign to Stop Killer Robots.
“The technology may be complicated, but the solution is simple — start negotiations for an international treaty to make lethal autonomous weapons illegal.”
Article 36 coined the term “meaningful human control” in a 2013 memo to CCW delegates, and Moyes will elaborate on its key elements in his presentation this week on definitions of lethal autonomous weapons systems.
The agenda for the third CCW meeting is packed with 34 experts presenting over eight sessions on autonomy, definitions, laws of war, human rights and ethics, and security concerns including operational risks. Friends of the chair include diplomatic representatives from Chile, Colombia, Finland, France, Sierra Leone, South Korea, Sri Lanka, and Switzerland. Several countries (Canada, Holy See, Japan, and Switzerland) have provided working papers in advance of the meeting elaborating their views on key issues under discussion.
Countries participating in the Geneva meeting will not take any formal decisions as the aim is to continue to build a common base of knowledge about technical, ethical, legal, operational, security, and other concerns relating to the weapons. However, they “may agree by consensus on recommendations for further work for consideration” by the CCW at its Fifth Review Conference on 16 December 2016.
Civil society experts are playing a leading role in helping to increase understanding of meaningful human control and inform this central aspect of the debate.
Bonnie Docherty will address a side event briefing on Thursday to present her report for Human Rights Watch and Harvard Law School’s International Human Rights Clinic on “Killer Robots and the Concept of Meaningful Human Control.” The report reviews legal precedents for control and finds that meaningful human control over the use of weapons promotes compliance with the principles of international humanitarian law, notably distinction and proportionality, and is also crucial to international human rights law.
Dr. Heather Roff of Arizona State University, who is also a member of ICRAC, has authored a new briefing on “Meaningful Human Control, Artificial Intelligence, and Autonomous Weapons” together with Moyes of Article 36. On Monday afternoon, Roff will address a CCW session to consider “mapping autonomy” as well as a side event briefing.
Australia’s Professor Walsh, who is speaking at a side event briefing tomorrow, helped draft an open letter issued in July 2015 and signed by more than 3,000 artificial intelligence experts that calls for a ban on “offensive autonomous weapons beyond meaningful human control.”
Comments
9 responses to “An Australian Professor Is Talking At The UN About A Ban On Killer Robots”
It all sounds great and all, stop robots, let humans make the choices, so let’s count how many meaningfully selected targets were accounted for at Hiroshima and Nagasaki. An extreme example for sure, but what of countless other explosives? Human decisions and choices incur countless unnecessary casualties.
One machine, one terminator, one target, with sufficient precision to only take out the intended target and any hostile resistance would certainly be a better option than “bomb the shit out of country x until we get our target”.
“What if bad guys get robots too?” well, what if they get bombs and nukes? They already have those. Science fiction has made us paranoid about robots and AI.
It’s people I’m afraid of.
This is useless. America already ignores UN stuff that it doesn’t like, how will this be different?
Yeah unfortunately for humanity killer robots are going to be too effective for governments not to use.
A government has a problem in another country; morally rightly or wrongly, they send in an army of robots and sort it out: take oil, protect a minority, whatever. Your general public doesn’t care because family members (soldiers) aren’t dying on foreign ground. Air strikes will only ever be so effective; you need boots (roboclaws?) on the ground to hold territory.
Even if most countries say “No”, you can’t compel countries not to build them.
Then, if they have killer bots, we should too.
Not to mention what’s going to happen with police forces for regular civilians.
But, at least the smart people are talking about it, unfortunately the smartest ones often don’t get to make the decisions.
Good luck. I mean, they can just call them drones if a human is behind them. You’ll never know.
It’s all about who to blame when someone blows up a wedding/funeral. You have to be able to blame someone for pulling a trigger, not for programming bad code.
(Apparently. I don’t actually get why. They’re each as bad as the other.)
Actually, I think you’ll find that humans are excellent at rationalizing away the moral complications of anything that they really want to do.
If the professor wishes to argue along those lines, it would seem more logical that robots can preserve the sanctity of life by being programmed to: hard-and-fast coding that would hold intact where human morality, with sufficient moral flexibility, might fail.
Who’s more likely to pull the trigger on a civilian dwelling? A drone officer who has been ordered directly and told that there is ‘sufficient threat’ to justify the collateral damage? Or an AI routine which has the capacity to be programmed not to, no matter how scary the CO might be, or how much it might want to avoid a court martial.
I mean for fuck’s sake, Windows routinely refuses to let me delete files which it declares are ‘in use’. Which of us – the human or the machine – is more likely to stomp their foot and demand, “FUCKING DO IT ANYWAY GOD DAMMIT, COMPUTER!”
If you program robots with a preset list of conditions on when is apparently an ‘appropriate and ethical’ time to violate the sanctity of human life (the way humans do), the robot’s a fuck of a lot more likely to adhere to that programming than a squishy human is.
(Not that this couldn’t be gamed – BY HUMANS – through manipulating the target selection data and falsifying the information that the to-kill-or-not-to-kill algorithm bases its decision-making on. But again: Robots aren’t likely to intentionally falsify that data to get some people killed the way that humans would.)
I think you are getting a step ahead of where a significant problem lies with this to begin with: the question of how you define a conceptually simple thing like what a human is.
Seriously, how do you define what a human is to a computer? Computerphile has a number of very good videos that discuss this exact problem. Things that are simple for a human are quite often incredibly difficult to program.
Even if you do get this right, what then are the parameters of these ‘moral codes’ for it to follow? How do you determine a target to kill? Is a child automatically banned from being targeted? What happens if the child is strapped full of explosives and about to kill 100 other people?
If you venture into the realm of machine learning, you run into the issue that we can’t even begin to understand what the decision process involves. We simply set a task and tell the computer if it is doing it right or wrong. Down this path you get a completely unknown set of algorithms that the computer is following. It may work for each scenario we can think to test, but what about the ones we forget, or never imagine to begin with? How do you know the AI will make the choice you want?
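That “right or wrong” training loop can be made concrete with a toy sketch (a tiny perceptron on made-up data, nothing to do with any real weapons system): the rule the machine ends up following is just a handful of learned numbers, with no human-readable justification attached.

```python
# Toy sketch: train a perceptron to separate two clusters of points.
# We only ever tell it "right" or "wrong"; the decision rule it learns
# is nothing but three floating-point numbers.
import random

random.seed(0)

# Labelled training data: point (x, y) gets label 1 if x + y > 1, else 0.
data = []
for _ in range(200):
    x, y = random.random(), random.random()
    data.append(((x, y), 1 if x + y > 1 else 0))

w = [0.0, 0.0]  # learned weights
b = 0.0         # learned bias
lr = 0.1        # learning rate

for _ in range(20):  # training epochs
    for (x, y), label in data:
        pred = 1 if w[0] * x + w[1] * y + b > 0 else 0
        err = label - pred          # the "right or wrong" signal
        w[0] += lr * err * x        # nudge weights on every mistake
        w[1] += lr * err * y
        b += lr * err

# The learned "decision process" is opaque: just these numbers.
print(w, b)

# It works on cases far from the boundary, like the training data...
assert (1 if w[0] * 0.9 + w[1] * 0.9 + b > 0 else 0) == 1
assert (1 if w[0] * 0.1 + w[1] * 0.1 + b > 0 else 0) == 0
# ...but nothing in w and b tells you *why*, or what it does on
# inputs unlike anything it was trained on.
```

Even in this trivial case the model is only testable, not inspectable; scale that up to a system deciding who is a valid target and the commenter’s worry about scenarios nobody thought to test becomes the whole problem.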
I think the problem is that there isn’t ‘hard-and-fast’ coding to describe the sanctity of life.
While this is true, they appear to be arguing about a point in time where that ability has been reached. Otherwise there’s not exactly much point in arguing about how a decision should be made by a human and not a robot, because currently it can’t make that decision.
They’re trying to argue against creating technology that can do that in the first place, because god knows it’s a lot easier to say, “Don’t manufacture a gun,” than, “Don’t pull the trigger.” Because once it’s there, it’s just waiting to be used. They’re trying to prevent it from being there TO use.
I for one welcome our new robot overlords!
On a side note, what a waste of time. Because the UN of all places is going to be able to stop governments who want this tech from developing it. What are they going to do, send a stern letter?