Nick Bostrom wrote:
> [...] Drexler has sketched out a simple
> mechanical nanocomputer the size of a sugar cube
> with a processing power one million times that
> of a human brain.
>
> [...] Hence we can run the best human brains at
> a speed where after one day they will have
> experienced more than two thousand subjective
> years each.
This ignores the (apparently) important role of chaos in the brain. Computers, as we all know, have a difficult time simulating chaotic systems: any rounding error in the initial conditions grows exponentially, so the simulation soon stops tracking the system it is supposed to model. Uploading to a conventional computer would therefore be foolish. (And the faster you run the simulation, the more foolish you look.)
Obviously this could be a problem for regular Artificial Intelligence as well. It could be that to have a complex adaptive intelligence you need an analog architecture.
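A minimal sketch of the chaos point, using the logistic map as a stand-in for any chaotic system (the map is just an illustrative toy of mine, nothing from Drexler or Bostrom): two trajectories that start a rounding error apart share essentially nothing after a few dozen steps, which is exactly what a digital simulation of a chaotic brain is up against.

    # Logistic map x_{n+1} = r*x*(1-x) with r = 4.0, a standard chaotic example.
    # Two runs that start one part in 10^12 apart have completely diverged
    # after roughly forty steps -- the fate of any rounding error.
    r = 4.0
    x_a, x_b = 0.4, 0.4 + 1e-12
    for n in range(1, 61):
        x_a = r * x_a * (1 - x_a)
        x_b = r * x_b * (1 - x_b)
        if n % 10 == 0:
            print(f"step {n:2d}: x_a={x_a:.6f}  x_b={x_b:.6f}  |diff|={abs(x_a - x_b):.2e}")

Run it and the difference climbs from 10^-12 to order one by around step forty; running the simulation a million times faster only gets you to the wrong answer sooner.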
> During this time they will almost certainly have
> managed to design more effective architectures
> such as electronic nanocomputers and quantum
> computers.
An interesting question would be how many uploads you would need. I would hazard a guess that a lone human, given all the time in the world, couldn't recreate the achievements of a human society. Perhaps how we communicate and what we create is just as important as how we think. This can just as easily be applied to artificial intelligence.
> Since the uploads can control a molecular lab at
> molecular speeds, they are not slowed down by
> having to rely on humans to carry out trial-and-
> error testing.
Of course, if an upload (or AI) did require a radically different architecture, it might not conform to the same scaling laws as conventional computing. Timing may well be all-important, and the upload (or AI) might have to stay at the same speed as the rest of us.
> It is perfectly possible that the singularity
> happens before uploading takes place, but this
> argument shows it will happen no later.
It is perfectly possible that the singularity is a method of not having to go into deep historical context when writing science fiction novels.
BM