Yes you do; you stated in the original post, regarding Eliezer's AI-will-save-us scenario, that "The notion sounded absurd to me at first". There you have it: that was your common sense talking, your rational instinct for self-preservation. Only later did your vision get blurred by Eliezer's relentless smooth talk, which apparently manages to obscure the simple fact that he cares about the Singularity and the Singularity alone. When it really comes down to it, you are expendable to him. He'll "leave you suckers to burn", as he so eloquently put it.
> > If nanotech comes first, on the other hand, we *will*
> > have a fighting chance, certainly if we start planning
> > a space program (as mentioned in the "SPACE:
> > How hard IS it to get off Earth?" thread)
> But, did we not pretty much agree that a space program
> may not help, for replicating nanoassemblers might easily
> be blasted loose from the surface of a planet by a stray
Hmmm, would it really be that easy to escape Earth's gravity well & atmosphere? It would take one helluva meteorite to do that, IMO, and those are rather rare. Even then the nanites would just drift around, not really doing much. In the meantime we'd be working hard on uploading etc. in our Martian (or whatever) base.
> never mind intentional spewing forth of tiny
> amounts of goo. Nanobots travel better than we do.
Nanites are only (truly) dangerous when controlled by an intelligence. As long as that intelligence is merely human, we have a fighting chance: fight nanites with nanites, incinerate them with nukes, take out their operators, run from them at the highest achievable % of lightspeed, fool their sensors (with infiltrator nanites, for example), etc. A reasonably fair fight. When fighting an ASI with nano (and JHWH knows what other) powers, on the other hand, the odds are *badly* against you.
> Eliezer, what is your notion?
Better ask some "neutral" third party (he's rather biased, you know, and of course so am I).