The Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

In I, Robot, Isaac Asimov set out the Three Laws of Robotics, and then spent the rest of the book finding loopholes in them. What happens, for instance, with a robot that can read minds and feels compelled to tell each person who questions it the answer that person most wants to hear? Wouldn’t giving any other answer, even a true one, be hurtful (and therefore a violation of the First Law)? What about a bright, arrogant robot that a pissed-off worker tells to “Get lost”? Isn’t the robot obliged to stay hidden? What about robots asked to solve problems whose answers could bring harm to humans? Should the positronic brain give the answer, or refuse to consider the question? And what if the robot has been programmed with a sense of humor?

The thread that holds the short stories in I, Robot together is the intervention of Susan Calvin, robopsychologist. Calvin is brought in when the robots suffer mental breakdowns, refuse to cooperate (!), or otherwise malfunction. Like Dian Fossey, who preferred her gorillas in the mist to humans, Calvin has decided that robots, by design, are more decent and humane than their creators.

Asimov published I, Robot in 1950; the Three Laws of Robotics have shaped his own later work, as well as that of many other science fiction and fantasy writers (see the Zeroth Law of Robotics, the New Laws of Robotics, the Robot series, and Robot and Empire).


Asimov, Isaac. I, Robot. Bantam Books paperback edition, 1991. ISBN 0553294385.

See also: www.bookworm.com.au/bt000154.htm, www.asimovonline.com/