Teaching Robots To Behave Ethically

While other researchers are busy teaching robots how to lie, professor Susan Anderson and her husband Michael have taught a robot how to behave ethically. I know which team is getting my research dollar.

Susan Anderson is a philosopher. Her husband Michael is a computer scientist. By their powers combined, they've advanced the young field of machine ethics considerably, all in the name of making robots treat us like human beings.

"There are machines out there that are already doing things that have ethical import, such as automatic cash withdrawal machines, and many others in the development stages, such as cars that can drive themselves and eldercare robots," says Susan, professor emerita of philosophy in the College of Liberal Arts and Sciences, who taught at UConn's Stamford campus. "Don't we want to make sure they behave ethically?"

Machine ethics combines ethical theory with artificial intelligence to give electronic lifeforms a sense of ethics. The jury is still out as to whose ethics should be instilled in robots, but I'm glad someone is looking into it.

The couple based their work on the prima facie duty approach to ethics, introduced by the Scottish philosopher W. D. Ross in 1930. This approach has a person weigh each possible action against a set of obligations, such as doing no harm, promoting health and safety, and being courteous. It's a complicated method for human beings to use, but it's perfect for machines.

Indeed, it's perfect for robots, specifically ones assigned to a fixed set of tasks, like making sure a patient takes their medication, as seen in the video above. The set of obligations for that specific situation is programmed in, and the robot knows how to respond correctly.

"Machines would effectively learn the ethically relevant features, prima facie duties, and ultimately the decision principles that should govern their behaviour in those domains," says Susan.

The trickiest part of teaching robots ethics is that it's difficult for many humans to grasp the concept themselves. Perhaps one day humans will be taking ethical cues from machines.

The ethical robot [Physorg.com]


Comments

    So how long before I get my robot best friend, promised to me as a kid by Beyond 2000??

    It's like 2010.
    Where's my robot best friend already??
    Where's my driverless car??
    Where's my virtual reality home console??

    Maybe I should sue the ABC for being liars.

      Multiple wars, a global financial crisis, global warming... which equals many people dying, out of work, and worrying about things other than technology for entertainment.

    Having a robot perform an "ethical" task is not the same as having a robot understand why it is doing it. There is a massive difference. Is my antivirus software behaving ethically because it protects my PC? It's arguable.

    Factoring ethics into decision making is what (currently) makes humans different from AI. From a purely logical standpoint, it is an inefficient step. True AI may choose to bypass any rules of ethics in favour of its own efficiency.

    Our ethics, especially in relation to other humans, are based primarily on instincts and communal values. True AI would transcend the need for such things.

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    Sure, the problems interpreting these laws were a major theme in Asimov's robot series, but I think they still serve as an excellent blueprint for what an ethical serving robot should be. In code, that strict priority ordering might look something like the toy sketch below.
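    (My own illustration, with made-up action flags -- nothing from the article.)

        # Toy model: the Three Laws as a strict priority ordering.
        # Each candidate action is just a dict of hypothetical flags.
        ACTIONS = [
            {"name": "shield_human", "human_safe": True,  "obeys_order": True,  "self_safe": False},
            {"name": "stand_by",     "human_safe": False, "obeys_order": True,  "self_safe": True},
            {"name": "flee",         "human_safe": False, "obeys_order": False, "self_safe": True},
        ]

        def law_rank(action):
            # Lexicographic ordering: the First Law dominates the Second,
            # the Second dominates the Third, so self-preservation always
            # yields to human safety.
            return (action["human_safe"], action["obeys_order"], action["self_safe"])

        best = max(ACTIONS, key=law_rank)
        print(best["name"])  # -> "shield_human"

    Of course, as the books show, the hard part is deciding what counts as "harm" in the first place.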

      Good man that.

      This thread alone has me itching to dig out Asimov's Robot Collection.

      Humans are prone to waging war. Robots would intervene to prevent casualties, turning every human city into a police state to protect us against possible conflict.
      (With low-level violence permitted against those who resist and try to destroy the robots.)

      Asimov's books are great, but even HE proved the Three Laws are flawed.
