Robots will soon be weaving tangled webs, as researchers at the Georgia Institute of Technology grant robots the power to deceive, which sounds like a perfectly good idea to me.
People tend to get angry when I casually mention the theoretical robot apocalypse, so I generally avoid the subject. Realistically, robots are simply machines limited to performing the tasks assigned to them by humans. They can’t think for themselves. Any intelligent thought by a robot is merely a result of programming.
And now we’re programming them to be deceptive, so even if they do develop independent thought, they won’t tell us. It’s not in our best interests to know.
“We have developed algorithms that allow a robot to determine whether it should deceive a human or other intelligent machine and we have designed techniques that help the robot select the best deceptive strategy to reduce its chance of being discovered,” said Ronald Arkin, a Regents professor in the Georgia Tech School of Interactive Computing.
The experiment used to train a robot to deceive involved two robots: one black and one red. The black robot was programmed to recognise situations where deception was warranted and then perform said deception.
Arkin and research engineer Alan Wagner used interdependence theory and game theory to create algorithms the robot could use to weigh the value of using deception in a given situation.
The test itself involved a series of three hiding places, the paths to which were lined with coloured markers. The black robot was the hider, and the red one the seeker. Using deception, the black robot would knock down the markers leading to one hiding place, then hide in a different spot without disturbing the markers along the path to it. The red robot, fooled by this chicanery, would be programmed to shake its tiny robot arm and say, “Aw, you got me.”
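The paper itself isn’t reproduced here, but the decision the black robot faces can be sketched in a few lines. This is a minimal illustration of the kind of interdependence-theory check described above, not the researchers’ actual code: deception is deemed warranted when the interaction involves both conflict (the agents want different outcomes) and dependence (each agent’s payoff hinges on the other’s action). The spot names, payoff values, and decoy-selection rule are all made up for the example.

```python
# Hypothetical hide-and-seek model; names and payoffs are illustrative only.
SPOTS = ["left", "middle", "right"]

def payoff(hider_spot, seeker_guess):
    """Zero-sum outcome: the hider scores when the seeker guesses wrong."""
    hider = 1 if seeker_guess != hider_spot else 0
    return hider, 1 - hider  # (hider_payoff, seeker_payoff)

def deception_warranted():
    """Interdependence-theory test: deceive only under conflict + dependence."""
    outcomes = [payoff(h, s) for h in SPOTS for s in SPOTS]
    # Conflict: no single outcome is best for both agents at once.
    conflict = not any(hp == 1 and sp == 1 for hp, sp in outcomes)
    # Dependence: the hider's payoff varies with the seeker's choice.
    dependence = any(
        payoff(h, s1)[0] != payoff(h, s2)[0]
        for h in SPOTS for s1 in SPOTS for s2 in SPOTS
    )
    return conflict and dependence

def choose_deception(true_spot):
    """Pick a false trail: knock down markers toward some other hiding place."""
    decoys = [s for s in SPOTS if s != true_spot]
    return decoys[0]  # simplest possible strategy: first alternative

if deception_warranted():
    print(f"Knock down markers toward: {choose_deception('middle')}")
```

In the real experiment the selection step was more sophisticated, weighing which deceptive strategy was least likely to be discovered; here the decoy is just the first alternative spot, for brevity.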
“The experimental results weren’t perfect, but they demonstrated the learning and use of deception signals by real robots in a noisy environment,” said Wagner. “The results were also a preliminary indication that the techniques and algorithms described in the paper could be used to successfully produce deceptive behaviour in a robot.”
What good are deceptive robots? On the battlefield they’d be able to outmanoeuvre foes by providing false clues as to their whereabouts. In rescue operations, a robot might use deception to help calm people awaiting rescue. In the robot apocalypse, they could use deception to lead humans into traps.
Oops, sorry about that.
Still, the researchers are considering the ethical ramifications that teaching robots to lie and cheat could have on civilisation.
“We have been concerned from the very beginning with the ethical implications related to the creation of robots capable of deception and we understand that there are beneficial and deleterious aspects,” explained Arkin. “We strongly encourage discussion about the appropriateness of deceptive robots to determine what, if any, regulations or guidelines should constrain the development of these systems.”
Since this is unfolding right in my backyard, I’ll be sure to let you folks know if any robots come knocking on my door trying to offer me lollies.