nly_ if it does _not_ completely recover from the
effects of an outside force imposed upon it. If it recovers completely,
it's just as it was before. Consequently, it hasn't learned anything.
The organism _must change_."
He rubbed the bridge of his nose and looked out over the faces of the
men before him. A faint smile came over his wrinkled features.
"Some of you, I know, are wondering why I am boring you with this long
recital. Believe me, it's necessary. I want all of you to understand
that the machine you will have to take care of is not just an ordinary
computer. Every man here has had experience with machinery, from the
very simplest to the relatively complex. You know that you have to be
careful of the kind of information--the kind of external force--you give
a machine.
"If you aim a spaceship at Mars, for instance, and tell it to go
_through_ the planet, it might try to obey, but you'd lose the machine
in the process."
A ripple of laughter went through the men. They were a little more
relaxed now, and Fitzhugh had regained their attention.
"And you must admit," Fitzhugh added, "a spaceship which was given that
sort of information might be dangerous."
This time the laughter was even louder.
"Well, then," the roboticist continued, "if a mechanism is capable of
learning, how do you keep it from becoming dangerous or destroying
itself?
"That was the problem that faced us when we built Snookums.
"So we decided to apply the famous Three Laws of Robotics propounded
over a century ago by a brilliant American biochemist and philosopher.
"Here they are:
"'_One: A robot may not injure a human being, nor, through inaction,
allow a human being to come to harm._'
"'_Two: A robot must obey the orders given it by human beings except
where such orders would conflict with the First Law._'
"'_Three: A robot must protect its own existence as long as such
protection does not conflict with the First or Second Law._'"
Fitzhugh paused to let his words sink in, then: "Those are the ideal
laws, of course. Even their propounder pointed out that they would be
extremely difficult to put into practice. A robot is a logical machine,
but it becomes somewhat of a problem even to define a human being. Is a
five-year-old competent to give orders to a robot?
"If you define him as a human being, then he can give orders that might
wreck an expensive machine. On the other hand, if you don't define the
five-year-old a