Re: Why would AI want to be friendly?

From: Eugene Leitl (eugene.leitl@lrz.uni-muenchen.de)
Date: Wed Sep 27 2000 - 03:31:17 MDT


J. R. Molloy writes:

> Well, they *shall* be of vastly differing complexities... when they do actually
> emerge.
 
What part of "phyle radiation" don't you understand?
 
> Meanwhile, we can use the fact that the AI is not homogenous to choose the most
> friendly of the bunch as replicators or parents of future generations of AI.

Aaargh. The only AI that counts is the one that passes through the
diversity bottleneck by virtue of the positive-feedback
self-enhancement loop. That is the factor which wipes out all other
contestants. Coming in a close second, still no cigar.

Only then does it radiate.

> This will, of course, insure that they remain friendly.
 
Of course. In your fairy-tale universe. A little bit too much is at
stake to count on that.

> Friendlies magnify friendliness by terminating unfriendlies.
 
Aargh^2. Only the one AI that falls into the feedback loop
counts. Tell me why that one should remain friendly. And "terminating"
sounds rather unfriendly to me in a coevolutionary scenario. Sounds
like begging to have your ass kicked.

> "A technophobic view of the future places humans on the same highway to
> extinction [as the dinosaurs] driven there not by a cataclysmic meteor crash,
> but by the impact of robots with intelligence vastly superior to that of purely
> biological mankind."

Apart from molecular autoreplicators, that's the only technology I can
think of right now which could really end humankind. If I'm
"technophobic", then D'Aluisio is suicidal. If I have to choose between
"technophobic" and "suicidal", I choose technophobic every time.

I like guns, but that doesn't mean I have to play Russian roulette.

> --D'Aluisio



This archive was generated by hypermail 2b29 : Mon Oct 02 2000 - 17:39:14 MDT