The tensions associated with robotic development in modern culture mirror many of the tensions that have long informed people’s interactions with technological advances. The twentieth century began in an atmosphere of optimism about technology’s promise, but the events the new technology brought to pass soon undercut that optimism. Between 1914 and 1945, the world saw the advent of the armored tank, the bomber and the fighter plane, rockets carrying bombs, mustard gas and other chemical weapons, the use of existing technology to facilitate the murder of millions of Jews and other minorities, and the nuclear bomb, powerful enough to wipe entire cities off the map. The optimism of 1910 gave way first to the disillusionment of the Lost Generation of the 1920s and then to the dystopian despair of George Orwell and those who shared his dark vision of the future. Science fiction, which in the stories of H.G. Wells had foreseen the coming dangers of technology, became the locus of many of the darkest visions of the future. The robot in particular attracted the most fantastic fears and imaginings, beginning with the visions of Mary Shelley and brought into modern times by Karel Čapek’s R.U.R., which gave the English language the word “robot.” The adventures of R. Daneel Olivaw, a robotic creation who inhabits many of Isaac Asimov’s stories and novels, brought these speculations into popular culture. In “Mirror Image,” the interplay between concepts of robotic development and elements of human nature informs the plot, the theme, and the ultimate outcome of the story.
Detective Elijah Baley and his partner, R. Daneel Olivaw (who happens to be a robot), first appeared in Isaac Asimov’s novels The Caves of Steel and The Naked Sun. In response to fan requests to see the pair again, Asimov penned the story “Mirror Image.” Baley receives a surprise message from Daneel asking him to help solve a dispute over authorship between two Spacer scientists. Spacers will not willingly associate with Earthmen, but both scientists have agreed to let Baley question their personal robots. Interestingly, the robots are the same make and model, and their stories are mirror images of one another. During the interrogation, each robot insists that his master made the scientific discovery in question and that the other scientist is lying in claiming it for himself. Since there are only two scientists involved, one of them must be lying through his robot: one master ordered his robot to lie, while the other is telling the truth. Through the interrogation, Baley reaches a conclusion based on his knowledge of the Three Laws of Robotics, which Asimov had used to define the parameters of his fictional universe. His answer to the puzzle, however, also rests on his knowledge of human nature.
The Three Laws of Robotics first appeared in Isaac Asimov’s short story “Runaround,” and the interactions among those laws drive many of the complications in his works. The laws are as follows:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
The structure of these laws makes clear that Asimov’s aim was to ease the fears people would feel about having robots around them in society. Judging by the earliest story of a human-created being, though, the laws may not have needed to be structured this way. In Mary Shelley’s Frankenstein, the creature’s first impulse is to love Dr. Frankenstein for creating him and to seek fellowship with him. It is the doctor’s horror that sends the creature out into the world; it is the horror of the family near whose home he hides that sends him back to the doctor to ask for a companion; and it is the anger he feels when the doctor refuses him a companion that drives him to violence against the doctor’s family. Interestingly, the last person against whom the creature contemplates violence is the doctor himself; at the end, even though the creature is far stronger, he flees from the doctor rather than killing him on the spot. Based on this, one might wonder whether the actual source of the problem is not humanity itself.
Later stories of the interaction between robots and humanity gave the machines more of an innate malevolence. The film Westworld featured a fantasy theme park in which human guests could interact with androids, even murdering them or having sex with them, for the price of $1,000 per day (remember, this was 1973). Each android is programmed to play a figure from its section’s time period; the Gunslinger, for example, is a robot set up to initiate duels, but programmed to draw so slowly that any human guest can defeat it. Problems begin to spread, however: a robot rattlesnake bites a guest, and the Black Knight, in the medieval section of the park, slays a guest during a sword fight. When the resort’s management cuts the electricity, all this accomplishes is locking the technicians in the control tower while the robots run on their stored battery power. Two guests wake up after a night of drinking to find the Gunslinger challenging them; it now draws faster, kills one of the guests, and chases the other when he flees. Yet the malevolence does not arise within the androids as an intrinsic trait. The androids received their violent roles through programming by their human owners. The source of the evil is, ultimately, the humans who built and programmed the machines.
In “Mirror Image,” it is the implanted lie that creates the stress in the situation. While Baley is questioning the robot that belongs to the older scientist, asking how the lie the robot has been ordered to tell would affect other humans, the robot suddenly goes into stasis. Baley deduces that the cause is an internal conflict between the order to lie to protect its master and the injury that this particular lie would cause the other scientist. The First and Second Laws have come into conflict, and the robot, unable to resolve the situation, shuts down. Baley then reveals that his knowledge of human nature told him how to question the two robots. Given the human tendency to revere the old, and the tendency of the old to fear the accomplishments of the young, it made far more sense for the younger scientist to make the discovery and present it to the older one, who then claimed to have made it first, than for the process to happen in reverse. Confronted with his robot’s reaction as evidence of plagiarism, the older scientist confesses almost immediately.
In real life, robots can perform tasks of labor but are not self-aware in the way the robots of science fiction have long been, even before the word “robot” existed. It is this self-awareness that creates fear within society, both inside and outside the world of fiction. It is not necessarily artificial life itself that causes the ethical problem, but the creation of self-awareness: the ability to make decisions, placed in beings that were not created by God. The problem, of course, comes when the creature’s decisions do not follow the wishes of the creator. That dilemma has been with us since the story of Adam and Eve, in which the first two creatures did not follow the will of their Creator, and it persists with robots. What elements should be programmed in, and what should be left out? One of the central controversies in the episodes and films featuring the crew of Star Trek: The Next Generation involves Commander Data’s lack of emotions as an android. His creator developed an emotion chip for him, which, in the hands of Data’s brother Lore, helped drive Lore to madness. In First Contact, the Borg Queen tempts Data with emotion and even grafts of human skin in hopes that he will join the Borg; at the film’s climax, however, it is Data’s decision to deceive the Queen and keep his android nature, rather than accept her offer to become more “human,” that saves the day. The visions of the Star Trek universe are, of course, far less dystopian than those of many other science fiction authors.
Ultimately, the Frankenstein complex remains alive and well. Even the cloning of sheep and other animals has raised the hackles of bioethicists, and human cloning remains beyond the ethical pale for many policymakers. Even if the ability existed to make self-aware robots, one wonders how acceptable society would find them. Of course, many taboos that long existed in society have since become commonplace. But just as Einstein came to lament the development of atomic weapons, one can imagine future scientists coming to rue the advent of the first self-aware mechanical being.
Works Cited
Asimov, Isaac. “Mirror Image.” 1972. PDF file.
McCauley, Lee. “The Frankenstein Complex and Asimov’s Three Laws.” www.aaai.org. Association for the Advancement of Artificial Intelligence, 2007. Web. 1 Feb. 2013.
Westworld. Dir. Michael Crichton. Perf. James Brolin, Yul Brynner. MGM, 1973. Film.