Robot laws

Isaac Asimov, 1965

The robot laws (English: Three Laws of Robotics) were first described by Isaac Asimov in his short story Runaround (Astounding, 1942) as the "basic rules of robot service".

General

Asimov's laws are:

  1. A robot must not (knowingly) injure a human being or, through inaction, (knowingly) allow a human being to come to harm.
  2. A robot must obey the orders given to it by a human being, unless such an order would conflict with the First Law.
  3. A robot must protect its own existence as long as this protection does not conflict with the First or Second Law.

Note that the laws are hierarchical: each law takes precedence over the ones that follow it. They form the background of the science fiction stories collected in I, Robot (1950) and have since shaped the general perception of what a robot should be and how it should behave. The robots described by Asimov are bound by these laws in their behavior and their decisions.
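This priority ordering can be sketched in code. The following is a minimal illustration only; the Action fields, the helper names and the reduction of each law to a single boolean flag are assumptions made for this example and are not anything specified by Asimov:

  # Sketch: the hierarchy of the three laws as a lexicographic ordering.
  from dataclasses import dataclass

  @dataclass
  class Action:
      harms_human: bool        # would the action (or the inaction it implies) harm a human?
      disobeys_order: bool     # would it ignore an order given by a human?
      endangers_robot: bool    # would it put the robot's own existence at risk?

  def law_violations(action: Action) -> tuple:
      # Position in the tuple encodes the hierarchy: First, Second, Third Law.
      return (int(action.harms_human),
              int(action.disobeys_order),
              int(action.endangers_robot))

  def choose(candidates: list[Action]) -> Action:
      # Tuple comparison is lexicographic, so a violation of the First Law
      # always outweighs any number of violations of the lower-ranked laws.
      return min(candidates, key=law_violations)

  # Example: sacrificing itself while obeying and protecting a human ranks
  # better than staying safe while a human comes to harm.
  print(choose([Action(True, False, False), Action(False, False, True)]))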

Civil use

Concrete requirements for programming industrial and household robots cannot be derived from these laws today; instead, various safety regulations apply here (see the Machinery Directive).

Military use

Robots in the military sector (automatic weapon systems, smart bombs, drones, combat robots) do not follow these laws. See also: safety of robotic systems.

Zeroth Law

In the novels The Awakening to the Stars and The Galactic Empire (original title: Robots and Empire), the "Zeroth Law" was derived, and the other three laws were modified accordingly:

  0. A robot must not injure humanity or, through inaction, allow humanity to come to harm.
  1. A robot must not injure a human being or, through inaction, allow a human being to come to harm, unless this would violate the Zeroth Law.
  2. A robot must obey the orders given to it by human beings, unless such orders would conflict with the Zeroth or First Law.
  3. A robot must protect its own existence as long as this does not conflict with the Zeroth, First or Second Law.

In the first version of Asimov's robot laws, the individual human being was the highest authority to be protected. In the new four robot laws, humanity stands above the individual human being. Here the consequence of the hierarchical structure of the laws becomes particularly clear, since a robot must give preference to the integrity of a large group of people over that of an individual. The introduction of the Zeroth Law is therefore to be viewed critically, since it gives robots the possibility of deliberately injuring or even killing individual people in order to protect humanity. This raises the fundamental question of how far robots should be allowed to harm people.

Safety

Even if the laws seem clear, they are not "foolproof", especially because they were conceived by humans in human terms and are therefore incomplete. The film I, Robot, for example, shows that the three laws do not prevent robots from seizing power in order to protect humanity from itself. This happens because the highly developed robot brain VIKI derives the Zeroth Law from the original three laws, which, after some almost philosophical reflection, can indeed follow from the three basic laws.

The robot laws, if not considered in their entirety, lead to paradoxes that have been portrayed in various literary works and in films.

Considered as a legal norm, a dogmatic interpretation of the robot laws shows that their implantation in corresponding units leads to irresolvable contradictions.

New laws

In the trilogy about the robot Caliban (the concept for the Caliban novels came from Asimov himself, but they were written by Roger MacBride Allen), the three laws are completely discarded and the "New Laws" are created:

  • A robot must not injure a person.
  • A robot is required to work with humans unless that collaboration is contrary to the First Law.
  • A robot must protect its own existence as long as it doesn't conflict with the First Law.
  • A robot is free to do what it wants, unless doing so would violate the First, Second, or Third Law.

Caliban itself is the prototype of a robot that is not bound by any law.

Trivia

The robots.txt of last.fm contains an allusion to Asimov's Laws of Robotics in these lines:

"Disallow: / harming / humans
Disallow: / ignoring / human / orders
Disallow: / harm / to / self"
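These lines use the ordinary robots.txt syntax, so a standard parser treats them like any other Disallow directive. As a small sketch (assuming the rules appear under a "User-agent: *" group, which is not part of the quoted excerpt), Python's standard urllib.robotparser can be used to check them:

  # Sketch: feeding the quoted directives to Python's standard robots.txt parser.
  # The "User-agent: *" line is an assumption; only the Disallow lines are quoted above.
  from urllib import robotparser

  rp = robotparser.RobotFileParser()
  rp.parse([
      "User-agent: *",
      "Disallow: /harming/humans",
      "Disallow: /ignoring/human/orders",
      "Disallow: /harm/to/self",
  ])

  print(rp.can_fetch("*", "https://www.last.fm/harming/humans"))  # False
  print(rp.can_fetch("*", "https://www.last.fm/music"))           # True (path not disallowed)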

In the third episode of the television series Raumpatrouille (first broadcast on October 15, 1966), the robots become the "guardians of the law". They take control of a mine in order to prevent people from harming themselves after a dispute (see the First Law). This is a further development of the plot on which Asimov's story Reason (German title: Vernunft), published as early as 1941, is already based.

Web links

Wiktionary: Robot law - explanations of meanings, word origins, synonyms, translations
