den Otter wrote:
> > From: Eliezer S. Yudkowsky <firstname.lastname@example.org>
> > Sooner or later, some generation will face the choice
> > between Singularity and extinction. Why push it off, even if we
> > could? And besides, we might not die at all.
> But we can't rely on that, can we?
Depends on what you mean by "rely". If you mean, "Can we assume that the probability is 90%?", the answer is "No". If you mean, "Can we behave as if the probability is 90%?", the answer is "Yes". Our world is dynamically unstable, and is being acted on by powerful forces and positive feedbacks which serve to destabilize it further. Under the circumstances, the only decision we can make effectively is whether we'll die in nuclear war or nanowar, or whether a Singularity will occur. These are the two stable states, and the Universe is filled with stable things.
The forces inside a Singularity are powerful, complex, and far more dependent on external factors than on the initial conditions. To the extent that initial conditions do have an effect, they must use unstable forms of insanity to shield the AI from the external truth. In short, trying to control the Singularity would result in a world scoured bare and THEN Transcension. While I might find this outcome acceptable, I don't think anyone else would - and even from my perspective it's too dangerous; what if the Blight scours the Earth bare and then commits suicide?
> This isn't completely true. We *can* command the future if we finish
> the dash for SI in first place. If you lose yourself in defeatism you're
> already as good as dead.
I couldn't command the future even if I had the complete source code of a Macintosh-compatible seed AI in front of me right now. Choose between pre-existing possibilities, perhaps, but not add possibilities that weren't there before.
> The facts are simple:
> 1) at this time, only a handful of people really grasp the enormity
> of the coming changes, and (almost certainly) most of them are
> transhumanists (unfortunately, even in this select group many
> can't/won't understand the consequences of a Singularity, but
> that aside).
Sounds a bit elitist to me. Are you sure *you're* one of the Chosen?
Seriously, my perspective doesn't really allow for dividing humanity into groups like that. Where thinking is concerned, you've got rocks, mortals, and Post-Singularity Entities. Sorting mortals by intelligence is as silly as separating rocks or PSEs.
> 2) This gives us a *huge* edge, further increased by the high
> concentration of scientific/technological talent in the >H community.
No, it doesn't. We ain't got no money. We ain't got no power. _Zyvex_ might be said to have an edge because it's doing work in nanotechnology. The MIT labs might be said to have an edge. If you're really generous, I could be said to have an edge because of "Coding A Transhuman AI" or "Algernon's Law". What I'm trying to convey is that each individual has ver own "edge". Not only that, but I think that if all the Extropians worked together it would simply slow things down.
> 3) If we start preparing now, by keeping a close eye on
> the development of technologies that could play a major part
> in the Singularity (nanotech, AI, implants, intelligence
> augmentation, human-machine interfaces in general etc.)
> and by acquiring wealth (by any effective means) to set up an
> SI research facility, then we have a real chance of success.
Go ahead. Don't let me stop you.
> Note: the goal should (obviously) be creating SI by "uplifting"
> (gradually uploading and augmenting) humans, *not* from AIs.
Too damn slow. Turnaround time on neurological enhancement is at least a decade, probably more - for real effectiveness, you have to start in infancy. I can't rely on the world surviving that long.
Also, I trust humans even less than I trust AIs.
Also, the first-stage neurological transhumans will just create AIs, since it's easier to design a new mind than untangle an evolved one.
> > To the
> > day when it is said, in the tradition of Oppenheimer who looked upon
> > another sun: "I am become Singularity, Breaker of Dreams."
> "Now I am become Death, the destroyer of worlds." - The First SI?
Not really. Poetically speaking, the idea is that a scientist or hacker on the verge of a powerful new technology briefly takes on the Aspect of whatever it is he has created. (This is hypothesized as a fluctuation in the social emotions and causal attribution, not a literally real event.) The analogy was between the Aspects of two great changes, not between two forms of death.
--
email@example.com  Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/sing_analysis.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I think I know.