"D.den Otter" wrote:
> No, no, NO!
Join us, Spike... don't be afraid...
> For god's sake man, trust your instincts
> on this one. If an AI singularity comes first, and the
> AI *is* fully rational, there is a *major* chance that it
> will kill us all. I'm pretty sure it's 99.something %
Let's apply Bayes' Theorem. Speaking of an event on which there is great controversy and no access to immediate evidence: if Human A says vis theory is 70% likely to be right, and Human B says vis theory is 99% likely to be right, which one is more likely to be totally wrong?
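A quick way to see the point, in odds form (my own illustration; the function name and numbers are mine, not from the original exchange): starting from even prior odds on a genuinely controversial question, a stated confidence implicitly claims a certain strength of evidence, and 99% claims vastly more than 70% does.

```python
# Toy Bayes calculation (illustrative sketch, not from the original post):
# starting from an even prior on a controversial question, what likelihood
# ratio (strength of evidence) must a forecaster implicitly claim in order
# to justify a given posterior confidence?

def required_likelihood_ratio(posterior, prior=0.5):
    """Odds form of Bayes' Theorem: posterior_odds = LR * prior_odds."""
    prior_odds = prior / (1 - prior)
    posterior_odds = posterior / (1 - posterior)
    return posterior_odds / prior_odds

for confidence in (0.70, 0.99):
    lr = required_likelihood_ratio(confidence)
    print(f"{confidence:.0%} confident -> evidence must be about {lr:.1f}:1")
```

With no immediate evidence available, the 99% claim is asserting roughly 99:1 evidence versus roughly 2.3:1 for the 70% claim, so the 99% claimant is the one whose confidence most outruns what ve could actually have.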
Trust me: I don't think I'm infallible.
> If nanotech comes first, on the other hand, we *will*
> have a fighting chance, certainly if we start planning
> a space program (as mentioned in the "SPACE:
> How hard IS it to get off Earth?" thread)
> and related projects (see my Nov. 14 post in the
> above thread for some suggestions) ASAP.
Maybe. But if humanity survives nanotech, sooner or later it will still come face-to-face with a greater-than-human intelligence. As you admit.
> [Wacky rant on "why subjective morality is the objective truth" deleted.]
> And yes, there is a "better way", and it's called
> synchronized uploading.
I wouldn't trust den Otter on this one, Spike. Long before he started talking about "synchronized uploading" he believed that only one Power could survive, and he plans to be it. He'll take your money, and while you're waiting for your journey into the promised land, he'll jump into the "experimental prototype hardware" and leave you suckers to burn.
--
firstname.lastname@example.org    Eliezer S. Yudkowsky
http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way