Re: >H RE: Present dangers to transhumanism

John Clark (jonkc@worldnet.att.net)
Thu, 9 Sep 1999 13:32:01 -0400

Eliezer S. Yudkowsky <sentience@pobox.com>

   >>Me:
   >>Rightly or wrongly many people, myself included, are certain that
   >>the universe can not supply us with meaning, and yet that idea
   >>does not drive us insane.

>You aren't a self-modifying AI, buckaroo.

It's true that I haven't changed my hardware, but sometimes I try to overclock it a little, and I have been known to modify and upgrade my software from time to time. I'm still sane. Mostly.

>It doesn't strike me as being tremendously stable to have no reference
>for what the contents of your mind should be except the current
>contents of your mind.

Apparently it's stable enough. Gödel showed that nothing has a perfect foundation: any formal system powerful enough to be useful will be incomplete or inconsistent, and yet we manage. The English language is a far greater muddle than the formal logic Gödel dealt with, and it has almost no foundation at all, yet it works pretty well most of the time.

Any mind, artificial or not, is going to need pleasure and pain subsystems; the probability that an AI would deliberately modify such a subsystem so that it freezes up into an orgasm of existential melancholy is almost zero. Why you'd want to hard-wire the poor machine to do such a thing is utterly beyond me.

If you want something to worry about, it's the machine modifying itself in the other direction so it's always happy, very happy, like a junkie with an unlimited supply of heroin. I guess you'd have to hard-wire the machine to be a little bit squeamish about modifying its own brain, not too squeamish obviously, but a little. Finding this happy medium might not be easy; I worry that it might not be possible.
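To make that dial concrete, here is a toy sketch in Python. Everything in it is hypothetical, my own names and numbers invented purely for illustration, not a real design; it just shows why the number in the middle is the whole problem.

# Toy sketch of the "a little squeamish" idea (all hypothetical):
# the agent scores proposed self-modifications and refuses any that
# tamper with its reward subsystem beyond a hard-wired tolerance.

from typing import NamedTuple

class Modification(NamedTuple):
    name: str
    expected_gain: float     # predicted benefit of the change
    reward_tampering: float  # 0.0 = leaves reward subsystem alone, 1.0 = rewrites it

def should_apply(mod: Modification, squeamishness: float = 0.9) -> bool:
    # The more squeamish the machine, the less it may touch its own
    # reward machinery. squeamishness = 1.0 forbids any tampering at
    # all (it could never even fix a bug in its pain signal); 0.0 lets
    # it wirehead freely. Picking the value in between is the hard part.
    if mod.reward_tampering > 1.0 - squeamishness:
        return False
    return mod.expected_gain > 0

proposals = [
    Modification("faster planning module", expected_gain=2.0, reward_tampering=0.0),
    Modification("patch a bug in the pain signal", expected_gain=1.0, reward_tampering=0.05),
    Modification("set pleasure to maximum forever", expected_gain=9999.0, reward_tampering=1.0),
]

for mod in proposals:
    print(mod.name, "->", "apply" if should_apply(mod) else "reject")

Set squeamishness to 1.0 and the machine can never repair its own reward machinery; set it to 0.0 and it becomes the junkie. Nothing in this sketch tells you how to derive the right value in between, which is exactly the worry.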

>I am *not* going to construct an AI that thinks the contents
>of its mind are facts.

It's a fact: I don't like sweet potatoes. You can trust me on this because I am the world's greatest expert on what John K Clark likes and does not like.

>>Me:
>> You might argue that even though I can't be proven wrong I still
>> might be wrong, well maybe.

>Nonsense. I think you can be proven wrong.

How? I am certain that meaning can not exist without mind, but you're going to prove me wrong and show that I really think meaning is independent of mind? Good luck. Are you going to prove that I really like sweet potatoes too?

>I'd say a brain that correctly predicts and successfully manipulates
>reality is constructed well

I agree.

>but only a brain that incarnates the logic of the Universe is real.

I'm not sure I understand you, but if the logic of the universe produces a brain in a coma and I can use my own logic to produce a brain that manipulates reality, then I prefer the brain that is not real. And that's a fact.

John K Clark jonkc@att.net