Can robots have ethics? Do Asimov's laws work?

In his 1942 story “Runaround,” Isaac Asimov defined the Three Laws of Robotics to govern a robot’s behavior. The three laws are meant to give robots a way to safely interact with humans and to allow humans to control robots and use them effectively. The laws are as follows:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Some sort of laws to govern a robot's behavior are needed, lest robots become dangerous. Examples of what can happen when robots don't behave appropriately are easy to find.

However, are Asimov’s laws dynamic enough to suit all needs?

Without Asimov's laws, Skynet wins.

If you think about it, probably not. To illustrate, let's pretend that you have a house robot that cleans for you, cooks for you, and in general makes your life easier. Now, while you're asleep, evil Dr. Burglar breaks into your house and decides he wants to hurt you and steal your things. Well, your robot servant just happens to have a welding torch built into it for plumbing tasks. You command the robot to defend you by using the welding torch on the burglar.

According to the laws, what does the robot do? Let’s examine the points of this situation:

  • Through inaction, the robot will allow you to be harmed. (Breaks the 1st law)
  • Following your order will harm another human. (Breaks the 1st law)
  • You gave it an order. (The 2nd law says it must obey)
  • But following your order would break the 1st law, so the order can't be obeyed. (The 2nd law goes unmet)

In this situation, what is appropriate? No matter what it does, the robot breaks a law. Could the robot modify its behavior to simply incapacitate the burglar? What if this is not possible?
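To make the deadlock concrete, here's a toy Python sketch. Everything in it is invented for illustration: the action names, the attributes, and the grossly simplified reading of the laws. The point it shows is the same one made above: whichever action the robot picks, the First Law is broken.

```python
# Hypothetical sketch: the two actions available to the house robot, and which
# of the Three Laws each one breaks under a very simplified reading.

candidate_actions = {
    "use the welding torch on the burglar": {
        "injures_human": True,       # the burglar is a human being
        "allows_owner_harm": False,  # the owner is protected
        "obeys_order": True,         # the owner's command is carried out
    },
    "do nothing": {
        "injures_human": False,
        "allows_owner_harm": True,   # through inaction, the owner comes to harm
        "obeys_order": False,        # the owner's command is ignored
    },
}

def broken_laws(action, order_conflicts_with_first_law=True):
    """Return which laws an action breaks (grossly simplified)."""
    broken = []
    # First Law: no injuring a human, and no allowing harm through inaction.
    if action["injures_human"] or action["allows_owner_harm"]:
        broken.append(1)
    # Second Law: obey orders, except where they conflict with the First Law.
    if not action["obeys_order"] and not order_conflicts_with_first_law:
        broken.append(2)
    return broken

for name, action in candidate_actions.items():
    print(f"{name}: breaks law(s) {broken_laws(action)}")
# use the welding torch on the burglar: breaks law(s) [1]
# do nothing: breaks law(s) [1]
```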

Let’s expand the argument to a greater scale. What if you are actually a government, the house is your country, the burglar is an enemy invader, and the robot’s welding torch is a military? How does this change the situation? Clearly, these three laws aren’t enough to govern every situation.

If you think about it, though, three laws can never really define all behavior: human morality defines what is right, and morality is far more complex than three rules.

Asimov decided that the robot should take whatever action breaks the fewest laws to the smallest degree. This is a step in the right direction, but who decides when one action breaks a law less than another? The robot would have to be told. But who tells the robot? A human or another robot would have to. And if you follow this chain of thought, you eventually arrive at the fact that it is a subjective choice. In effect, the robot takes on the morals of its creators and follows the laws according to that morality.
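One way to picture that tie-breaking rule is as a weighted scoring problem. Here's a rough sketch, continuing the burglar example; the weights, the severity numbers, and the extra "restrain the burglar" option are all made up, which is exactly the point: somebody has to pick them.

```python
# Hypothetical sketch of "break the fewest laws to the smallest degree":
# score each candidate action by how badly it violates each law, weight
# higher-priority laws more heavily, and pick the lowest total cost.

LAW_WEIGHTS = {1: 100, 2: 10, 3: 1}  # First Law dominates, then Second, then Third

candidate_actions = {
    # severity of each law violation on an arbitrary 0.0-1.0 scale (made up)
    "weld the burglar":     {1: 0.9, 2: 0.0, 3: 0.0},  # severe harm to a human
    "do nothing":           {1: 0.7, 2: 0.3, 3: 0.0},  # allows harm, ignores order
    "restrain the burglar": {1: 0.2, 2: 0.1, 3: 0.1},  # minor harm, partial obedience
}

def violation_cost(severities):
    # Weighted sum: higher-priority laws count more heavily toward the cost.
    return sum(LAW_WEIGHTS[law] * severity for law, severity in severities.items())

for name, severities in candidate_actions.items():
    print(f"{name}: violation cost {violation_cost(severities):.1f}")

best = min(candidate_actions, key=lambda name: violation_cost(candidate_actions[name]))
print("chosen action:", best)  # "restrain the burglar" with these numbers
```

The code happily picks whichever action scores lowest, but every number in it encodes a human judgment about how bad each violation is, which is the subjectivity described above.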

To make robots behave more appropriately, then, scientists are trying to expand and improve robot morality.

But at the end of the day, a robot is simply a computer system, so it comes down to programming to define how much ‘morality’ a robot can have. If you program a robot to move an object from Point A to Point B, then it will do just that, regardless of whether the object is a rock, a car, or a human being. You can add safeguards to lift only rocks and never humans, but that’s not morality, that’s just error handling.
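Here's what that kind of safeguard looks like as a toy Python sketch (the class and object names are invented): a hard-coded check the programmer remembered to write, nothing more.

```python
from dataclasses import dataclass

@dataclass
class WorldObject:
    name: str
    kind: str  # e.g. "rock", "car", "human"

class MoverRobot:
    FORBIDDEN_KINDS = {"human"}  # the programmer's hard-coded safeguard

    def move(self, obj: WorldObject, destination: str) -> None:
        # This is error handling, not morality: the robot has no idea *why*
        # humans must not be lifted, only a membership test someone wrote.
        if obj.kind in self.FORBIDDEN_KINDS:
            raise ValueError(f"refusing to move {obj.name!r} (kind={obj.kind!r})")
        print(f"moving {obj.name!r} from Point A to {destination}")

robot = MoverRobot()
robot.move(WorldObject("boulder", "rock"), "Point B")       # moves the rock
robot.move(WorldObject("Dr. Burglar", "human"), "Point B")  # raises ValueError
```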

It is the responsibility of the programmer to incorporate morality into a robot. Since this is such a daunting task, though, I doubt that “morality” will become common among robots anytime soon. I expect safeguards and fail-safes to improve, but that’s different than implementing a moral compass.
