Eugene Leitl wrote:
> "Will be determined by the laws of physics" is a remarkably
> contentless statement. Everything (apart from Divine Intervention,
> which most people here believe doesn't exist) is determined by the
> laws of physics. So what?
> To make this somewhat less noisy: I think rushing AI is at least as
> bad as rushing nano. Relying on the best-case scenario (where the
> first transcendee deliberately holds back (*all*) the horses to
> allow everybody to get on the bus) is foolish at best. To begin with,
> the perpetuator might not be human to start with.
> And say hello to oblivion,
OK, OK, I'll say it the long-winded way:
The determining factor on this issue is how hard AI and intelligence enhancement turn out to be. If intelligence is staggeringly complex, and requires opaque data structures that are generally inscrutable to beings of human intelligence, we get one kind of future (nanotech and human enhancement come online relatively slowly, AI creeps along at a modest rate, and enhanced humans have a good shot at staying in charge). If intelligence can be achieved using relatively comprehensible programming techniques, such that a sentient AI can understand its own operation, we get a very different kind of future (very fast AI progress leading to a rapid Singularity, with essentially no chance for humanity to keep up). Either way, the kind of future we end up in has absolutely nothing to do with the decisions we make.
Personally, I feel that the first scenario is somewhat more likely than the second. However, I can't know for sure until we get a lot closer to actually having sentient AI. It therefore pays to take out some insurance by doing what I can to make sure that, if we do end up in the second kind of future, we won't screw things up. So far, the best way I can see to influence events is to try to end up being one of the people doing the work (although I'm working on being one of the funding sources, which could potentially be a better angle).
Billy Brown, MCSE+I