> The determining factor on this issue is how hard AI and intelligence
> enhancement turn out to be. If intelligence is staggeringly complex, and
> requires opaque data structures that are generally inscrutable to beings
> of human intelligence, we get one kind of future (nanotech and human
> enhancement come online relatively slowly, AI creeps along at a modest
> pace, and enhanced humans have a good shot at staying in charge).
Are you saying that in this scenario, there is no real AI? That may be the case, which would be a sad thing in my opinion.
If you are saying that AI is possible but (*really*) hard, you still have the situation that when AI just above human-level intelligence is first achieved, it will have been built by humans. The AI is thus more competent than the people who created it, and so is competent, one would suppose, to take the work further, unless human intelligence is the absolute pinnacle of intelligence (maybe this is true), or there is some huge qualitative discontinuity between human intelligence and the next higher stable form of intelligence.
It's hard to imagine that, given human-equivalent AI, it could be prohibitively difficult to make a more intelligent being. Intuitively, throwing in fast enough hardware should be equivalent to a qualitative leap in intelligence, given a big enough difference between normal speed and fast (say 1000 times, as an absolute guess?).
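To put that guessed speedup factor in perspective, here is a trivial sketch of the subjective-time arithmetic it implies (the 1000x figure is the guess above, not an established number):

```python
# Subjective time gained by a hypothetical 1000x hardware speedup:
# a mind running 1000 times faster experiences 1000 subjective years
# per wall-clock year.
speedup = 1000  # the "absolute guess" from the text

subjective_years_per_year = 1 * speedup
print(subjective_years_per_year)  # -> 1000

# Equivalently, one subjective year of thought passes in under nine
# wall-clock hours.
hours_per_subjective_year = 365.25 * 24 / speedup
print(round(hours_per_subjective_year, 2))  # -> 8.77
```

Even if raw speed never yields a true qualitative leap, a mind doing a millennium of thinking per calendar year would be hard for unaided humans to keep pace with.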
> If intelligence
> can be achieved using relatively comprehensible programming techniques,
> such that a sentient AI can understand its own operation, we get a very
> different kind of future (very fast AI progress leading to a rapid
> Singularity, essentially no chance for humanity to keep up). Either way,
> the kind of future we end up in has absolutely nothing to do with the
> decisions we make.
Just some quibbles (yeah, I'll be first against the wall when the singularity comes - Hey Mr AI, you've got something hanging out of your nose, ha ha, hey, who turned out the lights, ACKCKK (gurgle))