Re: NANO: Custom molecules (gulp!)

Eliezer S. Yudkowsky (sentience@pobox.com)
Sun, 28 Nov 1999 14:15:42 -0600

"D.den Otter" wrote:
>
> Yes you do; you stated in the original post, regarding
> Eliezer's AI-will-save-us scenario, that "The notion sounded
> absurd to me at first". There you have it, that was your
> common sense talking, your rational instinct for self-
> preservation. Only later did your vision get blurred by
> Eliezer's relentless smooth talk, which apparently
> manages to obscure the simple fact that he cares
> about the Singularity and the Singularity alone. When
> it really comes down to it you are expendable to him.
> He'll "leave you suckers to burn", as he so eloquently
> put it.

What it really comes down to is that I don't consider myself competent to judge others as being "expendable" - or unexpendable. I leave such matters to SIs. I would like, personally, for as much of humanity as possible to go through the Singularity, both because I'm sentimentally fond of the human race and because we might turn out to be valuable.

If the SIs decide to exterminate you, Spike, I will no more object than I would object if they decided to exterminate me. I'm not responsible for the behavior of beings that are smarter than I am. First of all, I don't know what they're going to do, so my creation of SIs doesn't imply approval of any specific action. And if I did know, I would still refuse to be held responsible. The smartest being in the chain of causality is the one with final responsibility. If the SIs go haywire and do something dumb, they forfeit their claim to being the smartest beings in the chain, and the responsibility falls back on me; I will be at fault for the outcome. But I will not and cannot take responsibility for the sane decisions of a better mind. In my system of moral philosophy, I am not the center of the Universe.

But don't let my own peculiar moral philosophy obscure the fact that building an AI is still the best thing from a selfish perspective. You know you can't trust den Otter. Would you trust a being whose source code you could read? You know how long it will take to get uploading working. Don't you think an AI has a better chance of actually showing up before you personally, if not humanity as a whole, get eaten by goo?

Look at it this way. You have a choice of who to trust. You can trust den Otter, who is absolutely certain that personal survival is the only rational goal and that all others are dirt to be ground beneath his heel if he can get away with it. Or you can trust an AI with open source code and no human selfishness. Take your pick.

Of course, I'm being a bit hypothetical here. Personally, I find it highly doubtful that den Otter could keep his insane philosophy under the pressure of intelligence enhancement. I wouldn't stop him if he were climbing into an uploading kiosk right now; I'd help him in and press the button. A Singularity's a Singularity. The general issue I want settled is not selfishness versus altruism, or uploading versus AI: I want it established that, with other threats closing in, we can't afford to be picky about what kind of Singularity we take, much less be picky about who gets to be first.

den Otter legitimately points out that this result may derive from my own commitment to a Singularity, any Singularity, from something close to first principles. So it does. But that doesn't mean selfish logic can't arrive at the same conclusions. Remember, aside from my fondness for humanity and my own curiosity, I don't care whether the Singularity arrives in five years or five thousand. den Otter and I both agree that a Singularity of some sort is inevitable, if humanity survives. I'm in a hurry because I think humanity has a good chance of being wiped out by nanowar. den Otter is in a hurry because he thinks the first superintelligence(s) will act to wipe out everyone else. Maybe it's a good thing we don't share each other's world-models, because that would be depressing.

> Better ask some "neutral" third party (he's rather biased, you
> know, and of course so am I).

What "neutral" third parties? Is there anyone here who knows enough to have an opinion and doesn't have one?

-- 
           sentience@pobox.com          Eliezer S. Yudkowsky
        http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way