



  Suspicion

  Isaac Asimov's Robot City, Book 2

  Mike McQuay

  For Brian Shelton and the “bruised banana”

  The Laws of Humanics

  by Isaac Asimov

  I am pleased by the way in which the Robot City books pick up the various themes and references in my robot stories and carry on with them.

  For instance, my first three robot novels were, essentially, murder mysteries, with Elijah Baley as the detective. Of these first three, the second novel, The Naked Sun, was a locked-room mystery, in the sense that the murdered person was found with no weapon on the site and yet no weapon could have been removed either.

  I managed to produce a satisfactory solution but I did not do that sort of thing again, and I am delighted that Mike McQuay has tried his hand at it here.

  The fourth robot novel, Robots and Empire, was not primarily a murder mystery. Elijah Baley had died a natural death at a good, old age, and the book veered toward the Foundation universe so that it was clear that both my notable series, the Robot series and the Foundation series, were going to be fused into a broader whole. (No, I didn’t do this for some arbitrary reason. The necessities arising out of writing sequels in the 1980s to tales originally written in the 1940s and 1950s forced my hand.)

  In Robots and Empire, my robot character, Giskard, of whom I was very fond, began to concern himself with “the Laws of Humanics,” which, I indicated, might eventually serve as the basis for the science of psychohistory, which plays such a large role in the Foundation series.

  Strictly speaking, the Laws of Humanics should be a description, in concise form, of how human beings actually behave. No such description exists, of course. Even psychologists, who study the matter scientifically (at least, I hope they do) cannot present any “laws” but can only make lengthy and diffuse descriptions of what people seem to do. And none of them are prescriptive. When a psychologist says that people respond in this way to a stimulus of that sort, he merely means that some do at some times. Others may do it at other times, or may not do it at all.

  If we have to wait for actual laws prescribing human behavior in order to establish psychohistory (and surely we must) then I suppose we will have to wait a long time.

  Well, then, what are we going to do about the Laws of Humanics? I suppose what we can do is to start in a very small way, and then later slowly build it up, if we can.

  Thus, in Robots and Empire, it is a robot, Giskard, who raises the question of the Laws of Humanics. Being a robot, he must view everything from the standpoint of the Three Laws of Robotics - these robotic laws being truly prescriptive, since robots are forced to obey them and cannot disobey them.

  The Three Laws of Robotics are:

  1 - A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

  2 - A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

  3 - A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

  Well, then, it seems to me that a robot could not help but think that human beings ought to behave in such a way as to make it easier for robots to obey those laws.

  In fact, it seems to me that ethical human beings should be as anxious to make life easier for robots as the robots themselves would. I took up this matter in my story “The Bicentennial Man,” which was published in 1976. In it, I had a human character say in part:

  “If a man has the right to give a robot any order that does not involve harm to a human being, he should have the decency never to give a robot any order that involves harm to a robot, unless human safety absolutely requires it. With great power goes great responsibility, and if the robots have Three Laws to protect men, is it too much to ask that men have a law or two to protect robots?”

  For instance, the First Law is in two parts. The first part, “A robot may not injure a human being,” is absolute and nothing need be done about that. The second part, “or, through inaction, allow a human being to come to harm,” leaves things open a bit. A human being might be about to come to harm because of some event involving an inanimate object. A heavy weight might be likely to fall upon him, or he may slip and be about to fall into a lake, or any one of uncountable other misadventures of the sort may be involved. Here the robot simply must try to rescue the human being; pull him from under, steady him on his feet and so on. Or a human being might be threatened by some form of life other than human - a lion, for instance - and the robot must come to his defense.

  But what if harm to a human being is threatened by the action of another human being? There a robot must decide what to do. Can he save one human being without harming the other? Or if there must be harm, what course of action must he pursue to make it minimal?

  It would be a lot easier for the robot, if human beings were as concerned about the welfare of human beings, as robots are expected to be. And, indeed, any reasonable human code of ethics would instruct human beings to care for each other and to do no harm to each other. Which is, after all, the mandate that humans gave robots. Therefore the First Law of Humanics from the robots’ standpoint is:

  1 - A human being may not injure another human being, or, through inaction, allow a human being to come to harm.

  If this law is carried through, the robot will be left guarding the human being from misadventures with inanimate objects and with non-human life, something which poses no ethical dilemmas for it. Of course, the robot must still guard against harm done a human being unwittingly by another human being. It must also stand ready to come to the aid of a threatened human being, if another human being on the scene simply cannot get to the scene of action quickly enough. But then, even a robot may unwittingly harm a human being, and even a robot may not be fast enough to get to the scene of action in time or skilled enough to take the necessary action. Nothing is perfect.

  That brings us to the Second Law of Robotics, which compels a robot to obey all orders given it by human beings except where such orders would conflict with the First Law. This means that human beings can give robots any order without limitation as long as it does not involve harm to a human being.

  But then a human being might order a robot to do something impossible, or give it an order that might involve a robot in a dilemma that would do damage to its brain. Thus, in my short story “Liar!,” published in 1941, I had a human being deliberately put a robot into a dilemma where its brain burnt out and ceased to function.

  We might even imagine that as a robot becomes more intelligent and self-aware, its brain might become sensitive enough to undergo harm if it were forced to do something needlessly embarrassing or undignified. Consequently, the Second Law of Humanics would be:

  2 - A human being must give orders to a robot that preserve robotic existence, unless such orders cause harm or discomfort to human beings.

  The Third Law of Robotics is designed to protect the robot, but from the robotic view it can be seen that it does not go far enough. The robot must sacrifice its existence if the First or Second Law makes that necessary. Where the First Law is concerned, there can be no argument. A robot must give up its existence if that is the only way it can avoid doing harm to a human being or can prevent harm from coming to a human being. If we admit the innate superiority of any human being to any robot (which is something I am a little reluctant to admit, actually), then this is inevitable.

  On the other hand, must a robot give up its existence merely in obedience to an order that might be trivial, or even malicious? In “The Bicentennial Man,” I have some hoodlums deliberately order a robot to take itself apart for the fun of watching that happen. The Third Law of Humanics must therefore be:

  3 - A human being must not harm a robot, or, through inaction, allow a robot to come to harm, unless such harm is needed to keep a human being from harm or to allow a vital order to be carried out.

  Of course, we cannot enforce these laws as we can the Robotic Laws. We cannot design human brains as we design robot brains. It is, however, a beginning, and I honestly think that if we are to have power over intelligent robots, we must feel a corresponding responsibility for them, as the human character in my story “The Bicentennial Man” said.

  Certainly in Robot City, these are the sorts of rules that robots might suggest for the only human beings on the planet, as you may soon learn.

  Chapter 1. Parades

  It was sunset in the city of robots, and it was snowing paper.

  The sun was a yellow one and the atmosphere, mostly nitrogen/oxygen blue, was flush with the veins of iron oxides that traced through it, making the whole twilight sky glow bright orange like a forest fire.

  The one who called himself Derec marveled at the sunset from the back of the huge earthmover as it slowly made its way through the city streets, crowds of robots lining the avenue to watch him and his companions make this tour of the city. The tiny shards of paper floated down from the upper stories of the crystal-like buildings, thrown (for reasons that escaped Derec) by the robots that crowded the windows to watch him.

  Derec took it all in, sure that it must have significance or the robots wouldn’t do it. And that was the only thing he was sure of - for Derec was a person without memory, without notion of who he was. Worse still, he had come to this impossible world, unpopulated by humans, by means that still astounded him; and he had no idea, no idea, of where in the universe he was.

  He was young, the cape of manhood still new on his shoulders, and he only knew that by observing himself in a mirror. Even his name - Derec - wasn’t really his. It was a borrowed name, a convenient thing to call himself because not having a name was like not existing. And he desperately wanted to exist, to know who, to know what he was.

  And why.

  Beside him sat a young woman called Katherine Burgess, who had said she’d known him, briefly, when he’d had a name. But he wasn’t sure of her, of her truth or her motivations. She had told him his real name was David and that he’d crewed on a Settler ship, but neither the name nor the classification seemed to fit as well as the identity he’d already been building for himself; so he continued to call himself by his chosen name, Derec, until he had solid proof of his other existence.

  Flanking the humans on either side were two robots of advanced sophistication (Derec knew that, but didn’t know how he knew it). One was named Euler, the other Rydberg, and they couldn’t, or wouldn’t, tell him any more than he already knew - nothing. The robots wanted information from him, however. They wanted to know why he was a murderer.

  The First Law of Robotics made it impossible for robots to harm human beings, so when the only other human inhabitant of Robot City turned up dead, Derec and Katherine were the only suspects. Derec’s brief past had not included killing, but convincing Euler and Rydberg of that was not an easy task. They were being held, but treated with respect - innocent, perhaps, until proven guilty.

  Both robots had shiny silver heads molded roughly to human equivalent. Both had glowing photocells where eyes would be. But where Euler had a round mesh screen in place of a human mouth, Rydberg had a small loudspeaker mounted atop his dome.

  “Do you enjoy this, Friend Derec?” Euler asked him, indicating the falling paper and the seemingly endless stream of robots that lined the route of their drive.

  Derec had no idea of what he was supposed to enjoy about this demonstration, but he didn’t want to offend his hosts, who were being very polite despite their accusations. “It’s really… very nice,” he replied, brushing a piece of paper off his lips.

  “Nice?” Katherine said from beside him, angry. “Nice?” She ran fingers through her long black hair. “I’ll be a week getting all this junk out of my hair.”

  “Surely it won’t take you that length of time,” Rydberg said, the speaker on his head crackling. “Perhaps there’s something I don’t understand, but it seems from a cursory examination that it shouldn’t take you any longer than… ”

  “All right,” Katherine said. “All right.”

  “… one or two hours. Unless of course you’re speaking microscopically, in which case… ”

  “Please,” she said. “No more. I was mistaken about the time.”

  “Our studies of human culture,” Euler told Derec, “indicate that the parade is indigenous to all human civilizations. We very much want to make you feel at home here, our differences notwithstanding.”

  Derec looked out on both sides of the huge, open-air, V-shaped mover. The robots lining the streets stood quite still, their variegated bodies giving no hint of curiosity, though Derec felt it quite possible that he and Katherine were the first humans many of them had ever seen. Knowing nothing, Derec knew nothing of parades, but it seemed to be a friendly enough ritual, except for the paper, and it made him feel good that they should want him to feel at home.

  “Is it not customary to wave?” Euler asked.

  “What?” Derec replied.

  “To wave your arm to the crowd,” Euler explained. “Is it not customary?”

  “Of course,” Derec said, and waved on both sides of the machine that clanked steadily down the wide street, the robots returning the gesture with more nonreadable silence.

  “Don’t you feel like a proper fool?” Katherine asked, scrunching up her nose at his antics.

  “They’re just trying to be hospitable,” Derec replied. “With the trouble we’re in here, I don’t think it hurts to return a friendly gesture.”

  “Is there some problem, Friend Katherine?” Euler asked.

  “Only with her mouth,” Derec replied.

  Rydberg leaned forward to stare intently at Katherine’s face. “Is there something we can do?”

  “Yeah,” the girl answered. “Get me something to eat. I’m starving.”

  Rydberg swiveled his head toward Euler. “Another untruth,” he said. “This is very discouraging.”

  “What do you mean?” Derec asked.

  “Our hypotheses concerning the philosophical nature of humanics,” Rydberg said, “must have their foundation in truth among species. Twice Katherine has said things that aren’t true… ”

  “I am starving!” Katherine complained.

  “… and how can any postulate be universally self-evident if the postulators do not adhere to the same truths? Perhaps this is the mark of a murderer.”

  “Now wait a minute,” Derec said. “All humans make… creative use of the language. It’s no proof of anything.”

  Rydberg examined Katherine’s face closely. Then he pressed a pincer to her bare arm, the place turning white for a second before resuming its natural color. “You say you are starving, but your color is good, your pulse rate strong and even, and you have no signs of physical deterioration. I must conclude, reluctantly, that you are not starving.”

  “We are hungry, though,” Derec said. “Please take us where we might eat.”

  Katherine fixed him with a sidelong glance. “And do it quickly.”

  “Of course,” Euler said. “You will find that we are fully equipped to deal with any human emergency here. This is to be the perfect human world.”

  “But there are no humans here,” Derec said.

  “No.”

  “Are you expecting any?”

  “We have no expectations.”

  “Oh.”

  Euler directed the spider-like robot guiding the mover, and the machine turned dutifully at the next corner, taking them down a double-width street that was bisected by a large aqueduct, whose waters had turned dark under the ever-deepening twilight.

  Derec sat back and stole a glance at Katherine, but she was busily pulling bits of paper from her hair and didn’t notice him. He had a million questions, but they seemed better left for later. As it was, he had conflicting emotions to analyze and react to within himself.

  He was a nonperson whose life had begun scant weeks before, when he’d awakened without past or memory to find himself in a life-support pod, stranded upon an asteroid that was being mined by robots. They had been searching for something, something he had accidentally stumbled upon - the Key to Perihelion, at least one of the seven Keys to Perihelion. It had seemed of incredible import to the robots on the asteroid. Unfortunately, he had had no idea of what the Keys to Perihelion were or what to do with them.

  After that was the bad time. The asteroid was destroyed by Aranimas, an alien space raider, who captured Derec and tortured him for information about the Key, information that Derec could not supply. There he had met Katherine, just before the destruction of Aranimas’s vessel and their dubious salvation at the hands of the Spacers’ robots.

  The Spacers also wanted the Key, though their means of attaining it seemed slightly more civilized and bureaucratic than Aranimas’s. Katherine and Derec were polite prisoners of bureaucracy for a time on Rockliffe Station, their personalities clashing until they were forced to form an alliance with Wolruf, another alien from Aranimas’s ship, to escape their gentle captivity with the Key.

  They found that if they pressed the corners of the silver slab and thought themselves away from the Spacer station, they were whisked bodily to a dark gray void that they assumed to be Perihelion. Pressing the corners again, another thought brought them to Robot City. And then their thinking took them no farther, stranding them in a world populated by nothing but robots.

  And that was it, the sum total of Derec’s conscious life. He had reached several conclusions, though, scant as his reserve of information was: First, he had an innate knowledge of robots and their workings, though he had no idea from where his knowledge emanated; next, Katherine knew more about him than she was willing to tell; finally, he couldn’t escape the feeling that he was here for a purpose, that this was all some elaborate test designed especially for him.