Re: Yudkowsky's AI (again)

Eliezer S. Yudkowsky (sentience@pobox.com)
Wed, 24 Mar 1999 19:20:24 -0600

Dan Fabulich wrote:
>
> Again, it seems to me that if subjective meaning is true then I can and
> should oppose building a seed-AI like yours until I, myself, am some kind
> of a power. What's wrong with this argument? It seems like this argument,
> if true, annihilates your theory about the interim meaning of life.

The whole altruistic argument is intended as a supplement to the basic and very practical theory of the Singularity: If we don't get some kind of transhuman intelligence around *real soon*, we're dead meat. Remember, from an altruistic perspective, I don't care whether the Singularity is now or in ten thousand years - the reason I'm in a rush has nothing whatsoever to do with the meaning of life. I'm sure that humanity will create a Singularity of one kind or another, if it survives. But the longer it takes to get to the Singularity, the higher the chance of humanity wiping itself out.

My estimate, as of right now, is that humanity has no more than a 30% chance of making it, probably less. The most realistic estimate for a seed AI transcendence is 2020; for nanowar, before 2015. The most optimistic estimate for Project Elisson would be 2006; the earliest nanowar, 2003.

So we have a chance, but do you see why I'm not being picky about what kind of Singularity I'll accept?

I'm not at all sure about objective morality, although I'm fairly sure that human-baseline morality isn't an objective fact. And I can conceive of an "objective observer-dependent morality", where the invariant would shift from the correct choice itself to some more basic quantity, just as the Einsteinian formulation of spacetime made distance variable and the interval invariant.
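
To make the analogy concrete - and this is only a gloss on the Einstein comparison, not an argument in itself - in special relativity the spatial distance between two events depends on which observer you ask, but the interval

    s^2 = (c \Delta t)^2 - (\Delta x)^2 - (\Delta y)^2 - (\Delta z)^2

comes out the same for every inertial observer. An "objective observer-dependent morality" would work the same way: the surface-level correct choices could vary from mind to mind while some deeper quantity stayed invariant.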

The point is - are you so utterly, absolutely, unflinchingly certain that (1) morality is subjective, (2) your morality is correct, (3) AI-based Powers would kill you, and (4) human Powers would be your friends - that you would try to deliberately avoid an AI-based Singularity?

It will take *incredibly* sophisticated nanotechnology before a human can become the first Power - *far* beyond that needed for one guy to destroy the world. (Earliest estimate: 2025. Most realistic: 2040.) We're running close enough to the edge as it is. It is by no means certain that the AI Powers will be any more hostile or less friendly than the human ones. I really don't think we can afford to be choosy.

-- 
        sentience@pobox.com          Eliezer S. Yudkowsky
         http://pobox.com/~sentience/AI_design.temp.html
          http://pobox.com/~sentience/singul_arity.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.