Can we delegate moral decisions to robots?
The Office of Naval Research has undertaken a project to field robots that can make simple moral decisions, such as whether to help a wounded soldier instead of completing an assigned task on the battlefield.
An Army robot is tasked with transporting needed medicine to a field hospital. Along the way, it encounters a wounded soldier. Should the robot drop its cargo and carry the wounded soldier instead?
That’s one of the scenarios posed by Matthias Scheutz, principal investigator of a project funded by the Defense Department’s Office of Naval Research aimed at developing robots that can make moral decisions.
As the military fields robots with increasing degrees of autonomy, such scenarios become more likely. As a result, it would be, well … unethical of robot designers not to consider in advance what the machines should do in such circumstances.
According to Scheutz, the team – with researchers from Tufts University, Brown University and Rensselaer Polytechnic Institute – is developing algorithms to enable robots to weigh a variety of factors.
In principle, there’s not a high technical bar to overcome. It’s largely a matter of building enough sophistication into the logic and anticipating as many of the conditions the robot might encounter in the world as possible. From there, the robot has enough data to produce decisions that humans would ordinarily consider the right ones. And that will require a lot of trial and error, just as it does with humans.
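To make that concrete, here is a minimal sketch, in Python, of the kind of weighted-factor logic that paragraph describes. The actions, factors and weights are invented for illustration; this is not the project's actual algorithm.

```python
# Hypothetical illustration: weigh competing obligations with hand-tuned weights.
# The actions, factors and weight values below are invented for this example.

CANDIDATE_ACTIONS = {
    "continue_delivery": {"mission_value": 0.8, "lives_at_risk": 0.2},
    "assist_wounded_soldier": {"mission_value": 0.3, "lives_at_risk": 0.9},
}

WEIGHTS = {"mission_value": 0.5, "lives_at_risk": 1.0}  # refined by trial and error


def score(action_factors):
    """Combine each factor with its weight into a single utility score."""
    return sum(WEIGHTS[f] * v for f, v in action_factors.items())


def choose_action(actions):
    """Pick the action with the highest weighted score."""
    return max(actions, key=lambda a: score(actions[a]))


if __name__ == "__main__":
    print(choose_action(CANDIDATE_ACTIONS))  # -> assist_wounded_soldier
```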
Of course, the first hurdle for researchers to clear is the lack of agreement among humans about the proper ethical and moral response to a given situation. All of this raises a number of questions about attempting to emulate morals in machines.
I decided to ask Scheutz, professor of cognitive and computer science at Tufts University, some of those questions.
GCN: In humans, ethical systems and, especially, behaviors seem to be influenced by unconscious factors, past traumas, etc. Would robots with embedded moral or ethical logic vary from one robot to another?
Scheutz: They could.
GCN: Who will decide on the ethical rules for robots?
Scheutz: This is not our decision but that of those who will deploy the robots. I expect all ethical rules to conform to national and international laws (e.g., international humanitarian law).
GCN: Humans often modify their ethics as a context changes and interactions with other humans exert influence. What about robots? Are you planning for an interactive element?
Scheutz: The robots will have different levels of ethical reasoning, some more involved and sophisticated than others. They will also be able to simulate or follow human ethical reasoning (to some extent) to be able to understand why some humans might arrive at a particular conclusion that's possibly different from what the robot inferred.
In that sense, the robots will be able to work with different ethical systems, even though their actions will always be guided by the same system.
GCN: Can you give any details about the logic that will be used? How are factors in decision making given weights? Would the robot learn from actual outcomes? Or would human panels judge outcomes and call for changes in logic?
Scheutz: We are working on ways for the robot to be able to justify its decisions to humans. It is critical that humans be able to understand why the robot did what it did, not just because it determined that a particular course of action had the highest utility.
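Building on the toy sketch above, a decision routine that can justify itself might return the factor breakdown along with the chosen action, so a human can see what drove the choice. Again, this is an assumed illustration, not the team's implementation.

```python
# Assumed illustration: return the chosen action together with a human-readable
# explanation of the factors behind it, rather than only a utility score.

def choose_with_justification(actions, weights):
    """Return (best_action, explanation) so humans can audit the decision."""
    def utility(factors):
        return sum(weights[f] * v for f, v in factors.items())

    best = max(actions, key=lambda a: utility(actions[a]))
    breakdown = ", ".join(
        f"{f}={v} (weight {weights[f]})" for f, v in actions[best].items()
    )
    return best, (
        f"Chose '{best}' because its weighted factors ({breakdown}) "
        f"gave the highest utility, {utility(actions[best]):.2f}."
    )


if __name__ == "__main__":
    actions = {
        "continue_delivery": {"mission_value": 0.8, "lives_at_risk": 0.2},
        "assist_wounded_soldier": {"mission_value": 0.3, "lives_at_risk": 0.9},
    }
    weights = {"mission_value": 0.5, "lives_at_risk": 1.0}
    action, why = choose_with_justification(actions, weights)
    print(action)
    print(why)
```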
GCN: Humans often deceive themselves, making decisions under stress that they wouldn't have made sitting around a table with other humans. Might we expect robots, immune to stress, to make "braver" decisions?
Scheutz: The robots' decisions will not be subject to factors known to modulate decisions in humans, such as stress and negative emotions.
Scheutz earned a Ph.D. in philosophy at the University of Vienna as well as a Ph.D. in cognitive and computer science at Indiana University. “I have always been fascinated by the question of what a mind is and how it is possible that some physical systems (e.g., humans) can have minds,” he said.
“I think computer science and philosophy are complementary in many ways. Philosophy provides a framework to talk about minds and mental states while computer science provides the tools for implementing such frameworks and thus understanding them at a mechanistic level.”