Re: Why would AI want to be friendly?

From: Jason Joel Thompson (jasonjthompson@home.com)
Date: Sun Sep 24 2000 - 15:32:36 MDT


----- Original Message -----
From: "Darin Sunley" <rsunley@escape.ca>

> Given that humans are not infinitely complex, it is not an
> impossible task to discover the deterministic rules governing human
> behavior.

You don't need to be infinitely complex to be non-deterministic.

Determinism is fine for classical models of reality, but frankly I suspect
that reality is probabilistic, and that intelligence is an example of
emergent, higher-order adaptive structure built on fine-grained, uncertain
processes. (I'm aware that the jury is still out on this, however.)

I think that good AI will be aware of these probabilities to an arbitrarily
accurate degree, and this will give them a lot of power, particularly as
their ability to calculate these probabilities extrapolates into long-term
predictions. But a probabilistic model of reality often fails to confer much
power: I might know the precise odds of winning a particular spin of the
roulette wheel, yet be unable to leverage that knowledge into power.
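The roulette point is easy to make concrete: even perfect knowledge of the odds leaves the expected value negative. A minimal sketch, assuming a standard American wheel (38 pockets, a single-number bet paying 35:1; these specifics are my assumption, not from the post):

```python
# Knowing the exact odds of roulette does not make them favorable.
# Assumed setup: American wheel, 38 pockets, single-number bet pays 35:1.
pockets = 38
payout = 35  # profit per unit staked on a win

p_win = 1 / pockets
# EV per unit bet: win probability times payout, minus loss probability.
expected_value = p_win * payout - (1 - p_win)

print(f"P(win) = {p_win:.4f}")
print(f"EV per unit bet = {expected_value:.4f}")  # about -0.0526
```

Exact probabilistic knowledge here is real, but it buys no edge: the expected value stays at -2/38 per unit no matter how precisely you compute it.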

>
> Now, convincing her within 30 seconds could very well be impossible, just
> like you cannot overwrite 64 megabytes of memory with all 0s in less than
> 64 million (or so) fetch-execute cycles. The n-dimensional Hamming
> distance between those two mind-states may be too far to bridge using
> only 30 seconds of vocal input. But if you eliminate the time constraint,
> and give me, say, 6 months to get her to do a convincing chimp imitation,
> then again, given that simulation ability, I don't think it's an
> impossible task at all.
>
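The bound in the quoted passage is simple arithmetic, and the Hamming-distance framing is easy to make concrete. A minimal sketch, assuming one 1-byte store per fetch-execute cycle and 1 MB = 2**20 bytes (both assumptions, chosen to match the "64 million or so" figure):

```python
# Assumed: each fetch-execute cycle stores at most one byte.
MB = 2 ** 20
memory_bytes = 64 * MB
bytes_per_cycle = 1
min_cycles = memory_bytes // bytes_per_cycle
print(min_cycles)  # 67108864, i.e. "64 million (or so)" cycles

# The Hamming distance between two equal-length bit-strings counts how
# many positions must flip to turn one state into the other -- the
# analogy used for the gap between two mind-states.
def hamming(a: str, b: str) -> int:
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

print(hamming("010101", "000111"))  # 2
```

If each unit of input can flip only so many bits, a large enough Hamming distance simply cannot be bridged within the time limit, which is the quoted argument in miniature.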

Skepticism is the firewall against the system attack you're describing here.

--

::jason.joel.thompson:: ::founder::

www.wildghost.com



This archive was generated by hypermail 2b29 : Mon Oct 02 2000 - 17:38:49 MDT