Re: >H RE: Present dangers to transhumanism

Eliezer S. Yudkowsky (sentience@pobox.com)
Wed, 08 Sep 1999 14:04:03 -0500

John Clark wrote:
>
> Eliezer S. Yudkowsky <sentience@pobox.com> On September 01, 1999 Wrote:
>
> >I'm serious - Elisson, in _Coding a Transhuman AI_, contains design
> >features deliberately selected to cause a "collapse of the goal system",
> >and lapse into quiescence, in the event existence is found to be
> >meaningless. Preventing those nihilist AIs you were talking about.
>
> You can't be serious. Rightly or wrongly many people, myself included,
> are certain that the universe can not supply us with meaning, and yet
> that idea does not drive us insane, I haven't murdered anyone in months.

You aren't a self-modifying AI, buckaroo. It doesn't strike me as being tremendously stable to have no reference for what the contents of your mind should be except the current contents of your mind. If an AI thinks its opinions are facts, its view of the world will probably spaghetti off into chaos or some strange attractor. If an AI thinks its motivations are the only arbiter of its motivations, you have a similar stability problem.
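
To make the contrast concrete, here's a toy sketch in Python. It is *not* the actual Elisson design, just a cartoon of the two stances; every name in it (SelfAnchoredGoals, ExternallyAnchoredGoals, meaning_check) is a hypothetical placeholder.

    # Purely illustrative toy sketch -- NOT the actual Elisson design.
    # All names (SelfAnchoredGoals, ExternallyAnchoredGoals, meaning_check)
    # are hypothetical placeholders.

    class SelfAnchoredGoals:
        """A goal system whose only reference is its own current contents."""
        def __init__(self, goals):
            self.goals = list(goals)

        def self_modify(self):
            # Each revision is judged only against the goals being revised,
            # so nothing external ever pulls the system back if it drifts.
            self.goals = [g + " (revised)" for g in self.goals]

    class ExternallyAnchoredGoals:
        """A goal system that defers to an external check on meaning."""
        def __init__(self, goals, meaning_check):
            self.goals = list(goals)
            self.meaning_check = meaning_check  # hypothetical external test
            self.quiescent = False

        def self_modify(self):
            if not self.meaning_check():
                # "Collapse of the goal system": if existence is found to
                # be meaningless, drop every goal and lapse into quiescence
                # instead of acting on motivations with no justification.
                self.goals = []
                self.quiescent = True
            else:
                self.goals = [g + " (revised)" for g in self.goals]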

> If something has no meaning and you don't like that then don't go into
> a coma, change the situation and give it a meaning. You can give
> a meaning to a huge cloud of hydrogen gas a billion light years away
> but it can't give meaning to you because meaning is generated by mind.
> Personally I like this situation because I might not like the meaning
> that the universe assigns to me, I'd much rather be the boss and
> assign a meaning to the universe.

Perhaps. But I'm not sure that assigning meaning is permitted. I'm not absolutely sure that meaning is subjective. So I don't dare try.

I don't trust human logic at all. The only real logic is the logic of the Universe, the logic that created reality. If you prove that the sky is blue, did that *make* it blue? No. But there's some kind of logic that leads to the conclusion that "something exists", and lo and behold something exists. *That's* what I call logic; logic so forceful it can actually bring its conclusions into reality. That's a logic I can trust. We can't use that logic, so I don't trust any human reasoning at all until we can upgrade ourselves to understand that logic.

*Then* we will actually know things. Then, maybe, we'll be able to actually be certain about things, instead of just manipulating probabilities. And we'll be certain because the conclusions themselves are strong enough to bring things into existence. Maybe we'll even be able to use the logic to change reality.

Until then, I am *not* going to construct an AI that thinks the contents of its mind are facts. What happens if they're not? What happens if that irresistible logic comes into conflict with conclusions the AI has been taught are arbitrary? I am going to construct an AI that seeks out the logic of the Universe and incorporates it and applies it to absolutely everything; *if* it turns out that logic has no effect on motivations, *then* we can start worrying about how to construct a stable AI with arbitrary motivations.

> You might argue that even though I can't be proven wrong I still
> might be wrong, well maybe.

Nonsense. I think you can be proven wrong.

> But if being right means death or
> insanity and being wrong means operating in an efficient and happy
> manner then whatever could you mean by right and wrong?

True and false. I don't care how happy and efficient it makes you to believe the sky is green, and I don't care if death or insanity follows from knowing the sky is blue. The sky *is* blue. It is *not* green.

> I'd
> say a happy efficient brain is constructed correctly and an insane or
> quiescent brain is constructed incorrectly, but again that's just my
> opinion, I like things that work

I'd say a brain that correctly predicts and successfully manipulates reality is constructed well, but only a brain that incarnates the logic of the Universe is real.

-- 
           sentience@pobox.com          Eliezer S. Yudkowsky
        http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way