Re: Why would AI want to be friendly?

From: Darin Sunley (rsunley@escape.ca)
Date: Sun Sep 24 2000 - 13:30:26 MDT


-----Original Message-----
From: hal@finney.org <hal@finney.org>
To: extropians@extropy.org <extropians@extropy.org>
Date: Sunday, September 24, 2000 2:03 PM
Subject: Re: Why would AI want to be friendly?

>Not true. I view a gun in the process of firing at me as a deterministic
>process, but guess what, it's going to win. Just because something is
>deterministic doesn't mean that I can control it.
>

Think one level farther up. If a gun is in the process of firing at you, you
have already completely failed to predict the behavior of the agent firing
the gun. Now, obviously none of US can treat a human being as a
deterministic process, but what if the gun is being fired by a simple robot?
All it does is run in circles, and fire the gun from the same spot, at the
same target, every 30 seconds or so. You can quite easily determine the
pattern in that robot's behavior and never be at risk from the gun. The
difference between the behavior of a human and that robot is nothing more
than complexity. Given that humans are not infinitely complex, it is not an
impossible task to discover the deterministic rules governing human
behavior.
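To make the robot example concrete, here is a minimal sketch (in Python, with hypothetical names -- nothing here is from the original post) of how you could discover the robot's pattern from a log of its observed actions, by finding the shortest period that explains the whole log:

```python
def find_period(events):
    """Return the smallest p such that the observed event sequence
    repeats with period p, or None if no period shorter than the
    full log fits."""
    n = len(events)
    for p in range(1, n):
        if all(events[i] == events[i - p] for i in range(p, n)):
            return p
    return None

# Toy log of the robot: run in a circle twice, fire once, repeat.
log = ["circle", "circle", "fire"] * 4
print(find_period(log))  # -> 3
```

Once the period is known, the robot's next action is fully predictable, which is the sense in which its determinism makes it safe.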

>Believers in the omnipotence of AIs seem to think that for any given
>person, in any given situation, there is some input they can be given
>which will cause them to produce any desired output. If I see a nun
>in the middle of prayer in the chapel, there is something I can say to
>her that will make her immediately start screeching like a chimp while
>jumping around the room scratching herself.

You know, given a trillion high fidelity simulations of that nun to test
possible responses, I bet I could construct a sentence that would do just
that. My first order approximation is that it would involve me claiming to
be sent from God, combined with a thorough demonstration of omniscience
with respect to her life up to that point, all of which is easily achievable
given those simulations, and an arbitrary amount of subjective time to think
about it.

Now, convincing her within 30 seconds could very well be impossible, just
like you cannot overwrite 64 megabytes of memory with all 0s in less than 64
million (or so) fetch-execute cycles. The n-dimensional Hamming distance
between those two mind-states may be too great to bridge using only 30 seconds
of vocal input. But if you eliminate the time constraint, and give me, say,
6 months to get her to do a convincing chimp imitation, then again, given
that simulation ability, I don't think it's an impossible task at all.
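The memory-overwrite analogy above can be sketched as a lower bound: the Hamming distance between two states, divided by the bits you can change per step, bounds how fast any process can bridge them. A toy version in Python (the function names are my own illustration, not anything from the original post):

```python
def hamming_distance(a, b):
    """Number of differing bits between two equal-length byte states."""
    assert len(a) == len(b)
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def min_steps(a, b, bits_per_step):
    """Lower bound on steps needed to drive state a to state b when
    each step can change at most bits_per_step bits (ceiling division)."""
    d = hamming_distance(a, b)
    return -(-d // bits_per_step)

# Toy version of the 64-megabyte argument: flipping every bit of an
# 8-byte region, writing one 8-bit byte per fetch-execute cycle, takes
# at least 8 cycles -- no cleverness can beat the bound.
zeros = bytes(8)
ones = bytes([0xFF] * 8)
print(min_steps(zeros, ones, 8))  # -> 8
```

Relaxing the 30-second constraint to 6 months is exactly raising the step budget until the distance is bridgeable.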

Darin Sunley
rsunley@escape.ca



This archive was generated by hypermail 2b29 : Mon Oct 02 2000 - 17:38:48 MDT