Re: Why would AI want to be friendly?

From: Eugene Leitl
Date: Mon Oct 02 2000 - 03:47:07 MDT

Eliezer S. Yudkowsky writes:
> I seem to recall that when I tried to tell *you* that, you poured out buckets
> of objections upon my shrinking head. I don't recall you cheering on

I can't recall having put forth objections against that particular
point, but if I ever did, I was obviously mistaken.

While I wouldn't bet my life on it, this looks like one of those
punctuated-equilibrium events if I ever saw one, going back to the
primeval autocatalytic set (which must also have burned like wildfire
through the prebiotic soup).

> self-enhancement back when I was alone defending that argument against the
> legions of Hansonians.

I don't consciously recall that period, but the Hansonians are very
probably wrong. Unless we're both wrong, of course.

> > Only then it radiates.
> Excuse me, but I really have to ask: Why? What particular supergoal makes
> phylum radiation a good subgoal? It isn't a subgoal of being friendly, a

It is one of those emergent properties. As soon as you're in a
Darwinian regime, you can't do otherwise. And as soon as you have to
copy things imperfectly in the face of limited resources, you're there.
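That claim, that imperfect copying plus limited resources is all it takes, can be sketched in a few lines. The bitstring genomes, the fitness function, and every parameter below are illustrative assumptions of mine, nothing specific to SIs; the point is only that selection falls out of the cull step without being programmed in explicitly:

```python
import random

random.seed(0)

GENOME_LEN = 20      # a genome's "fitness" is just its count of 1-bits
POP_CAP = 50         # limited resources: the population cannot exceed this
MUTATION_RATE = 0.02 # imperfect copying: per-bit flip probability

def fitness(genome):
    return sum(genome)

def copy_imperfectly(genome):
    # Heredity plus variation: each bit flips with small probability.
    return [b ^ (random.random() < MUTATION_RATE) for b in genome]

def generation(pop):
    # Everyone replicates...
    offspring = pop + [copy_imperfectly(g) for g in pop]
    # ...but the resource cap culls the surplus. Selection emerges here,
    # as a side effect of scarcity, not as an explicit rule.
    offspring.sort(key=fitness, reverse=True)
    return offspring[:POP_CAP]

pop = [[0] * GENOME_LEN for _ in range(POP_CAP)]  # start at zero fitness
for _ in range(200):
    pop = generation(pop)

print(max(fitness(g) for g in pop))  # fitness climbs toward GENOME_LEN
```

Nothing in `generation` mentions competition, yet the population's best genome ratchets upward every run; remove either the mutation or the cap and the dynamic disappears.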

I do not see how you can break out of the Darwinian regime sustainably,
having to originate in it and being surrounded by it on all sides,
including stuff emerging in your very own belly. You say you can strip
it out, using the initial explosive growth, and, by virtue of absolutely
highest fitness (no such thing exists, because the fitness function is
strongly dominated by the other individuals in the population), kick any
Darwinian design's ass, whether it comes crawling in from the outside
(or you sail smack into a domain of it during your sysop cloning
expansion) or emerges spontaneously inside (prevented by exquisite
brittleness, I would imagine).

I dunno, no offense, but this sounds ridiculous to me.

> subgoal of personal survival, a subgoal of happiness, a subgoal of attempting
> to maximize any internal programmatic state, or a subgoal of trying to
> maximize any physical state in the external Universe. Phylum radiation is
> cognitively plausible only if the SI possesses an explicit drive for
> reproduction and diversification at the expense of its own welfare.

This presumes the seed never ceases to be a monad during its growth,
and that it stops growing after it has reached a certain spatial
dimension. (Of course, the chief objection is how you make that seed
grow in the first place, and how you keep it from going off in places
you don't want while the thing is out of your hands.) Since this
clearly limits its sphere of influence, it either keeps everybody
tucked in, or has to make a remote clone of itself, creating a
population of SIs. By keeping the clone brittle, it never grows
malign, and by design it never infringes on another sysop's domain.

Lots of complex assumptions there.

This archive was generated by hypermail 2b30 : Mon May 28 2001 - 09:50:14 MDT