Billy Brown writes:
> Eli isn't the only one. I figure that whether or not this scenario happens
> will be determined by the laws of physics, so I'm not worried about causing a
> disaster that would not otherwise have occurred. I am, however, very
> concerned about the potential for a future in which AI turns out to be easy,
> and the first example is built by some misguided band of Asimov-law
> enthusiasts.
To make this somewhat less noisy: I think rushing AI is at least as bad as rushing nano. Relying on the best-case scenario (where the first transcendee deliberately holds back (*all*) the horses to let everybody get on the bus) is foolish at best. To begin with, the perpetrator might not be human in the first place.
And say hello to oblivion,
'gene