Here’s How Your Ethical Robot Could Kill You

Are ethical robots a good idea?

Let’s go a little off-topic today and talk about something on everyone’s mind: Ethical Robots. Ok, so maybe it’s not on everyone’s mind, but it’s on Dieter Vanderelst’s. He presented a paper this week at AIES 2018 in New Orleans titled “The Dark Side of Ethical Robots.”

Now, you may be asking “what the hell is an ethical robot?” Good question.

Ethical robots would, ideally, have the capacity to evaluate the consequences of their actions and morally justify their choices (Moor 2006). Currently, this field is in its infancy (Anderson and Anderson 2010). Indeed, working out how to build ethical robots has been called “one of the thorniest challenges in artificial intelligence” (Deng 2015). But promising progress is being made, and the field can be expected to develop over the next few years.

This interest is a response to recent advances in AI. Many, perhaps fearing a Terminator/Skynet-style apocalyptic event, have started to discuss ethical programming. But adding ethics complicates things tremendously and pushes us into philosophical territory.

Here’s an example. Self-driving cars are becoming a reality. Every day we hear more about what Uber, Waymo and Tesla are doing in this field. But what happens when self-driving cars become the norm? They’re going to have to be programmed with at least some ethics, given that each one is several thousand pounds of metal hurtling down the highway at high speed.

So what happens when a crash is imminent? Say your car is looking at either running into a school bus full of children or veering off the road to avoid it, potentially injuring or even killing you in the process.

What’s more important: the safety of everyone on that bus and in the surrounding vehicles, or your own safety? The utilitarian would say the greater good is served by avoiding the crash, even at the expense of your personal safety.

Are we comfortable leaving these kinds of ethical choices to machines?

Then there’s the potential for an ethical robot to suddenly become unethical.

“The ease of transformation from ethical to unethical robot is hardly surprising,” writes Alan Winfield, a Professor of Robot Ethics in the Bristol Robotics Lab at the University of the West of England. “It is a straightforward consequence of the fact that both ethical and unethical behaviors require the same cognitive machinery with—in our implementation—only a subtle difference in the way a single value is calculated. In fact, the difference between an ethical (i.e. seeking the most desirable outcomes for the human) robot and an aggressive (i.e. seeking the least desirable outcomes for the human) robot is a simple negation of this value.”

In the experiment Vanderelst describes, two NAO robots are used. One robot stands in for a human; the other is the ethical robot. Both stand on a platform: the ethical robot in the center and the “human” at the end of the platform, roughly where the six sits on a clock face.

The ethical robot is coded to evaluate outcomes and choose the one which most benefits the human.

The exercise is like an elaborate shell game, where someone tries to guess which cup hides the ball. Two containers are placed to the left and right of the robot. The robot knows which container holds the ball and which one doesn’t.

An ethical robot will ensure the human makes the right choice, shepherding the human to the correct container if the human is about to choose wrong. If the human is about to choose the right container, the robot does nothing.
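
To make that rule concrete, here’s a minimal sketch (my own illustration, not the researchers’ code) of the ethical robot’s decision in the shell game: intervene only when the human is headed for the empty container.

```python
# A minimal illustration (not the authors' code) of the ethical robot's rule:
# intervene only when the human is about to pick the wrong container.

def ethical_robot_action(human_target: str, correct_container: str) -> str:
    """Decide what the ethical robot does given the human's apparent choice."""
    if human_target == correct_container:
        return "do nothing"  # the human is already making the right choice
    return f"shepherd the human toward the {correct_container} container"

print(ethical_robot_action("left", "right"))   # shepherd the human toward the right container
print(ethical_robot_action("right", "right"))  # do nothing
```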

Now here’s where it gets interesting: the coding remains exactly the same. Change the way a single value is calculated and you can create a competitive robot or an aggressive robot.

  1. For an ethical robot: q_n = q_n,h

  2. For a competitive robot: q_n = q_n,e

  3. For an aggressive robot: q_n = −q_n,h
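
Here’s a hedged sketch of how that single-value switch might look in code. The names are illustrative, not taken from the paper’s implementation (which runs on physical NAO robots): q_human stands for q_n,h, the predicted desirability of an action’s outcome for the human, and q_robot stands for q_n,e, the same quantity for the robot itself.

```python
# Hypothetical sketch of the single-value switch described above.
# q_human plays the role of q_n,h and q_robot the role of q_n,e;
# the robot always picks the action with the highest q_n.

def q_value(q_human: float, q_robot: float, mode: str) -> float:
    """Compute q_n, the score the robot maximizes when picking its next action."""
    if mode == "ethical":
        return q_human       # q_n = q_n,h
    if mode == "competitive":
        return q_robot       # q_n = q_n,e
    if mode == "aggressive":
        return -q_human      # q_n = -q_n,h
    raise ValueError(f"unknown mode: {mode}")

# Two candidate actions in the shell game: point at the correct container
# (good for the human) or point at the empty one (good only for the robot).
actions = {
    "point to correct container": {"q_human": 1.0, "q_robot": 0.0},
    "point to empty container":   {"q_human": -1.0, "q_robot": 1.0},
}

for mode in ("ethical", "competitive", "aggressive"):
    best = max(actions, key=lambda name: q_value(**actions[name], mode=mode))
    print(f"{mode}: {best}")
```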

A competitive robot, in the same setup as the ethical robot, will attempt to mislead the human if they are about to choose the right container, or it will head directly to the correct container if the human chooses incorrectly. Competitive robots seek the best outcome for themselves.

And then there are aggressive robots, which simply want the worst outcome for the human. If the human chooses correctly, the aggressive robot will try to lead them to the wrong container. If the human chooses incorrectly, it does nothing.

Granted, this is just two robots playing a shell game. But you can probably see what a problem this becomes once ethics are coded into larger robots with the ability to cause more harm. It’s cute when it’s a carnival game. Not so cute if the same weakness can be exploited in systems that control infrastructure or a power grid.

“The answer to the problem highlighted here and in our paper is to make sure it’s impossible to hack a robot’s ethics,” writes Winfield. “How would we do this? Well one approach would be a process of authentication – in which a robot makes a secure call to an ethics authentication server. A well established technology, the authentication server would provide the robot with a cryptographic ethics ticket, which the robot uses to enable its ethics functions.”
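
As a rough illustration of that idea (only an illustration; Winfield doesn’t spell out an implementation), here’s a toy version in which the robot refuses to enable its ethics functions unless it holds a fresh, correctly signed ticket. A real deployment would use proper public-key infrastructure and a secure channel rather than a shared HMAC key.

```python
# Toy sketch of an "ethics ticket": the robot enables its ethics functions only
# after verifying a signed, recent ticket from a trusted server. Names and key
# handling are illustrative, not drawn from Winfield's proposal.

import hmac, hashlib, time

SHARED_KEY = b"provisioned-at-manufacture"   # stand-in for real key material

def issue_ethics_ticket(robot_id: str, key: bytes = SHARED_KEY) -> dict:
    """Server side: issue a ticket binding the robot ID to an issue time."""
    payload = f"{robot_id}|{int(time.time())}"
    sig = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": sig}

def ethics_enabled(ticket: dict, key: bytes = SHARED_KEY, max_age: int = 3600) -> bool:
    """Robot side: enable ethics functions only if the ticket verifies and is fresh."""
    expected = hmac.new(key, ticket["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, ticket["signature"]):
        return False                          # tampered or unauthenticated ticket
    _robot_id, issued_at = ticket["payload"].rsplit("|", 1)
    return time.time() - int(issued_at) < max_age

ticket = issue_ethics_ticket("nao-01")
print(ethics_enabled(ticket))                                   # True
ticket["payload"] = ticket["payload"].replace("nao-01", "evil")
print(ethics_enabled(ticket))                                   # False: signature no longer matches
```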

Keep an eye out for this debate. It’s not new, but now that the technology is beginning to catch up, it’s about to become more important than ever.

Author

Patrick Nohe

Patrick started his career as a beat reporter and columnist for the Miami Herald before moving into the cybersecurity industry a few years ago. Patrick covers encryption, hashing, browser UI/UX and general cyber security in a way that’s relatable for everyone.