“Donald!” Fredda said.
He turned and looked at her with a steady gaze. “I regret saying so, Dr. Leving, particularly to you, the author of those Laws, but it is nonetheless true. The New First Law says a robot must not harm a human-but says nothing about preventing harm. A robot with foreknowledge of a murder is under no compulsion to give anyone warning. A robot who witnesses a murder is not compelled to prevent it.
“The New Second Law says a robot must ‘cooperate’ with humans, not obey them. Which humans? Suppose there are two groups of humans, one intent on evil, the other on good? How does a New Law robot choose?
“The New Third Law is the same as the old third-but relative to the weakened First and Second Laws, it is proportionately stronger. A so-called New Law robot will all but inevitably value its own existence far more than any true robot-to the detriment of the humans around it, who should be under its protection.
“As for the New Fourth Law, which says a robot ‘may’ do whatever it likes, the level of contradiction inherent in that statement is remarkable. What does it mean? I grant that the verbal expression of robotic laws is far less exact than the underlying forms of them as structured in a robot’s brain, but even the mathematical coding of the Fourth Law is uncertain.”
“I meant it to be vague,” Fredda said. “That is, I mean there to be a high level of uncertainty. I grant there is a basic contradiction in a compulsory instruction to act with free will, but I was forced to deal within the framework of the compulsory, hierarchical nature of the first three of the New Laws.”
“But even so,” Donald said. “The Fourth New Law sets up something utterly new in robotics-an intralaw conflict. The original Three Laws often conflict with each other, but that is one of their strengths. Robots are forced to balance the conflicting demands; for example, a human gives an order for some vitally important task that involves a very slight risk of minor harm to the human. A robot that is forced to deal with such conflicts and then resolve them will act in a more balanced and controlled fashion. More importantly, perhaps, it can be immobilized by the conflict, thus preventing it from acting in situations where any action at all would be dangerous.
“But the Fourth New Law conflicts with itself; and I can see no possible benefit in that. It gives semi-compulsory permission for a robot to follow its own desires-although a robot has no desires. We robots have no appetites, no ambitions, no sexual urges. We have virtually no emotion, other than a passion for protecting and obeying humans. We have no meaning in our lives, other than to serve and protect humans-nor do we need more meaning than that.
“The Fourth Law in effect orders the robot to create desires, though a robot has none of the underlying urges from which desires spring. The Fourth Law then encourages-but does not require-the robot to fulfill these synthetic desires. In effect, by not compelling a New Law robot to fulfill its needs at all times, the Fourth Law tells a robot to fulfill its spurious needs part of the time-and thus, it will not fulfill them at other times. It is compelled, programmed, to frustrate itself from time to time.
“A true robot, a Three-Law robot, left to its own devices, without orders or work or a human to serve, will do nothing, nothing at all-and be not at all disturbed by its lack of activity. It will simply wait for orders, and be alert for danger to humans. A New Law robot without orders will be a mass of conflicted desires, compelled to want things it does not need, compelled to seek satisfaction only part of the time.”
“Very eloquent, Donald,” Kresh said. “I don’t like New Law robots any better than you do-but what does it have to do with the case?”
“A great deal, sir. New Law robots want to stay alive-and they know that it is not by any means certain they will do so. Prospero in particular knew that Grieg was considering extermination as a possibility. They might well have decided to act in a misguided form of self-defense. The New Laws would permit them to cooperate with humans and assist in a murder, so long as they did not actually do the killing themselves. Caliban, of course, has no Laws whatsoever. There are no limits to what he might do. There is nothing in robotics to prevent him from actually pulling the trigger.”
“A rather extreme view, Donald,” Fredda said, quite surprised by the vehemence of Donald’s arguments.
“It is a rather extreme situation, Dr. Leving.”
“Do you have any evidence for all of this, aside from elaborate theory-spinning? Do you have any concrete reason for accusing Prospero and Caliban?”
“I have their confession,” Donald said.
“Their what?” Fredda almost shouted.
Donald held up a cautionary hand. “They confessed to blackmail, not murder. However, it is a frequent tactic of criminals to confess to a lesser charge in order to avoid a graver one.”