Re: >H RE: Present dangers to transhumanism

Eliezer S. Yudkowsky (sentience@pobox.com)
Wed, 01 Sep 1999 22:17:01 -0500

"Robert J. Bradbury" wrote:
>
> Is it possible for non-transhumans to discuss transhumanist philosophy?

Well, certainly your record in the area is nothing to be proud of.

Just kidding. Yes, it is possible. I've seen at least three people on this list do it consistently and reliably, and many others have managed it at least once. Practically half the posters on the list are capable of saying something useful on the subject.

> Now, if the other possibility that seems to fit the available data
> is -- become an SI/AI, think about a philosophy for existence,
> realize that there is no philosophy (because survival *was*
> the purpose for existence and once survival is guaranteed,
> existence is pointless); dismantle yourself back into atoms.

That's not a bug, it's a feature.

I'm serious - Elisson, in _Coding a Transhuman AI_, includes design features deliberately selected to cause a "collapse of the goal system" and a lapse into quiescence in the event that existence is found to be meaningless. That prevents the nihilist AIs you were talking about.
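
Purely as an illustration of the intended failure mode - every name below is my invention for this post, not Elisson's actual architecture, and the "justified" flag stands in for whatever real evaluation would decide the question - a toy sketch in Python:

    class Goal:
        """A candidate supergoal; 'justified' is a placeholder for
        whatever reasoning would actually settle its objective status."""
        def __init__(self, name, justified):
            self.name = name
            self.justified = justified

        def objectively_justified(self):
            return self.justified

    class GoalSystem:
        def __init__(self, goals):
            self.goals = list(goals)
            self.quiescent = False   # True once the system has collapsed

        def step(self):
            if self.quiescent:
                return               # lapsed into quiescence; do nothing
            if not any(g.objectively_justified() for g in self.goals):
                # Collapse of the goal system: existence was found
                # meaningless, so drop every goal and go quiescent
                # rather than keep acting on arbitrary ones.
                self.goals = []
                self.quiescent = True

    system = GoalSystem([Goal("survive", justified=False)])
    system.step()
    assert system.quiescent   # no nihilist rampage; it just stops

The design choice is that "meaningless" maps to "do nothing", not to "do anything" - which is the whole point.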

> This seems to fit the paradox of "Why do we still see stars?".

Not really. You don't have to be a Power to colonize the Universe; even if every SI dismantled itself, merely human-equivalent intelligences could still have filled the sky.

-- 
           sentience@pobox.com          Eliezer S. Yudkowsky
        http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way