Re: Genius dogs

Hal Finney
Wed, 8 Oct 1997 22:40:18 -0700

Nicholas Bostrom writes:
> I'm not sure that this specification of an infinitely fast computer
> is logically consistent. Suppose we combine it with a
> present day PC (or a simulated PC on the infinitely fast computer).
> Input any Turing machine description and a tape state to the PC. The
> PC then sends this info to the infinitely fast computer, which
> simulates the computation that the specified Turing machine performs
> if started on the specified tape state. If the Turing machine halts,
> then the infinitely fast computer stops and sends the message "Halted"
> to the PC; if the Turing machine doesn't halt then the infinitely
> fast computer you specified continues to run indefinitely. So the
> combined system can then solve the halting problem in the following
> way: the PC starts the simulation on the infinitely fast computer,
> waits one second, and if it has got the message "Halted" then it
> writes "Halts", and if it hasn't got the message "Halted" then it
> writes "Doesn't halt".
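The protocol above can be sketched in code. No infinitely fast computer
exists, of course, so the sketch below substitutes a hypothetical
stand-in that simulates the Turing machine under a finite step budget;
the names (`step`, `run`, `pc_decides_halting`) and the budget are my
own illustrative assumptions, not anything from the original argument.
A genuine infinite-speed oracle would need no budget, which is exactly
what makes the combined system a halting-problem solver.

```python
# Sketch of the PC + infinitely-fast-computer protocol.
# ASSUMPTION: `run` is a finite-budget stand-in for the infinitely
# fast computer; only a true infinite-speed machine would make the
# protocol a real halting-problem decider.

def step(tm, state, tape, head):
    """One Turing machine step. `tm` maps (state, symbol) to
    (new_state, new_symbol, move); a missing entry means halt."""
    symbol = tape.get(head, '_')          # '_' is the blank symbol
    if (state, symbol) not in tm:
        return None                       # no rule: machine halts
    new_state, new_symbol, move = tm[(state, symbol)]
    tape[head] = new_symbol
    return new_state, tape, head + (1 if move == 'R' else -1)

def run(tm, tape, budget):
    """Stand-in for the infinitely fast computer: simulate up to
    `budget` steps and send back "Halted" if the machine stops."""
    state, head = 'start', 0
    for _ in range(budget):
        nxt = step(tm, state, tape, head)
        if nxt is None:
            return "Halted"
        state, tape, head = nxt
    return None                           # still running at cutoff

def pc_decides_halting(tm, tape):
    """The PC's side: start the simulation, 'wait one second',
    then check whether the message "Halted" arrived."""
    message = run(tm, dict(tape), budget=10_000)
    return "Halts" if message == "Halted" else "Doesn't halt"

# Two toy machines: one that writes a symbol and halts, and one
# that walks right along the tape forever.
halting_tm = {('start', '_'): ('done', '1', 'R')}   # 'done' has no rules
looping_tm = {('start', '_'): ('start', '_', 'R')}  # loops indefinitely

print(pc_decides_halting(halting_tm, {}))   # prints "Halts"
print(pc_decides_halting(looping_tm, {}))   # prints "Doesn't halt"
```

With the stand-in, a looping machine that merely outlasts the budget
would be misclassified; the thought experiment's point is that an
infinitely fast computer eliminates that failure mode entirely.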

Interesting, although of course we all know that the infinitely fast
computer doesn't exist anyway. The point of the thought experiment,
similar to your own, is to focus on how hard it would be, as a matter
of logic, to come up with a method for creating intelligence.

I seem to recall a claim that Eric Drexler had an idea for creating
synthetic intelligence by means of an evolution simulation. You'd set
up a simulation of something similar to our own laws of physics, give
it initial conditions like the primordial earth, and just wait while
life and then intelligence evolve. Supposedly he had calculated that
a practical quantity of nanocomputers could re-create the entire
evolution of life on earth in a reasonable amount of time. Then maybe
you run it for a few more minutes and get super-human intelligence.

(Of course what would really happen is, once the AIs reach human level
intelligence, they'll stop getting smarter and instead create their own
nanotech simulations to evolve their own super-smart AIs. But those
simulated AIs won't get beyond human level either; they'll start up a
simulation of their own, etc., etc.)

I haven't been able to find the exact description of Drexler's idea,
but I think it was on this list sometime in the last few years. Does
anybody remember this?