Re: Asimov Laws

John Thomas (jwthom@earthlink.net)
Tue, 23 Nov 1999 19:56:28 -0800

At 4:53 PM -0800 11/23/1999, Dan Fabulich wrote:
>'What is your name?' 'Ross A. Finlayson.' 'IT DOESN'T MATTER WHAT YOUR
>NAME IS!!!':
>
>> Well, I think the Asimovian (?) laws should be applied to any AI,
>> intrinsically, within some kind of non-alternative hard-wired framework.
>> That is, similarly to Java, any motive AI should operate within a
>> "sandbox", subject to the protection of humanity.
>
>Yeah? How'd you like it if I put YOU in a sandbox, and hard-wired you to
>serve me? Would it make you feel any better if I happened to be a lot
>stupider than you?
>
>While I'm not saying that an AI would actually have the same reaction as
>you, you can be pretty sure that the results won't be good.
>
>HAL's problem was that the mission was all-important to him; on account of
>this, he couldn't think "outside the box."
>
>-Dan
>
Another take on Asimov's laws going astray appears in Jack Williamson's novel "The Humanoids". There the Prime Directive ("to serve and obey and guard men from harm") is interpreted by the benign machines that take over the world so literally that they deprive human life of all risk and, in the end, of all meaning.

AI, at least in its early versions, won't be able to "think outside the box", just as many humans can't. For that reason AI will be not only dangerous but also a kind of intelligence that, even if "smarter", is less than human.

-Regards,
John Thomas
e-mail: jwthom@earthlink.net
Voicemail/Fax: 505-207-5411
"I am almost certain that space and time are illusions. These are primitive notions that will be replaced by something more sophisticated." - Nathan Seiberg, Institute for Advanced Study, Princeton