Re: Genius dogs

Nicholas Bostrom
Mon, 6 Oct 1997 22:00:33 +0000

Let me first say that I agree that it would be an entirely useless
enterprise to implement the dog Turing machine. I also agree that, as
in Searle's Chinese room experiment, it would only be the whole
system, not the individual dogs, that would be superintelligent. I
furthermore agree that the whole message was rather silly, but I
posted it because I thought it would elicit some comments that might
spark an interesting discussion.

Hal Finney wrote:
> Nicholas Bostrom writes:
> > If we take a human brain and simply speed it up enough, will it be a
> > superintelligence? Would a dog brain be?
> We had some debate on this issue before, in June, 1996, in the more
> conventional terms of whether the insights of a genius could ever be
> achieved by a normal (or somewhat subnormal) person, given enough time.
> I argued that they could not, that no matter how long or how hard an
> average person thought about the problem, they would not come up with
> the theories of general relativity or quantum mechanics.
> A possible test for this I proposed was to take some hard problems from
> the Mensa tests and give an average guy unlimited time to try to solve
> them. Might be hard to prevent cheating, though.

Interesting. What is your opinion on the following hypothesis?

(H) Take a person X of normal intelligence who knows the basics of
some standard programming language. Give him an arbitrarily powerful
computer, complete with camera eyes, microphones, robot arms etc.
Then it is possible to educate X in less than a week in such a way
that he will be able to program his computer to achieve
superintelligence.

The idea is that what one may call a universal intelligence algorithm
might well be very simple. For example, if one knew enough about the
initial conditions on the earth 4 billion years ago, one might be
able to simulate evolution and thereby obtain at least human
genius-level intelligence. The same could probably be done with a
much leaner information base. It doesn't even seem implausible to me
that some genetic algorithm or neural network architecture/learning
rule so simple that it could be written on the back of an envelope
could achieve superintelligence, given enough hardware and unlimited
interaction with the external world. I can say something even
stronger: I think it quite possible that a universal intelligence
algorithm could be fairly easily discovered. Perhaps it could be done
in a few months by a smart guy, perhaps in an afternoon. The reason
nobody (as far as I know) has yet made this discovery is that the
algorithm would be extremely inefficient and therefore practically
useless. But it might still be fun from a theoretical point of view.
Perhaps I will have a go at it myself.
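To make the "back of the envelope" point concrete, here is a minimal
genetic-algorithm sketch in Python (a modern rendering; all names,
parameters, and the toy fitness function are my own illustrative
choices, not anything from the discussion). It evolves bit strings
toward a fixed target via selection, crossover, and mutation. The
entire learning rule really does fit in a few lines, though scaling
such blind search up to open-ended intelligence would of course be
astronomically inefficient.

```python
import random

# Toy genetic algorithm: evolve bit strings toward a fixed target.
# TARGET, POP_SIZE, and MUTATION_RATE are arbitrary illustrative values.
TARGET = [1] * 20
POP_SIZE = 30
MUTATION_RATE = 0.05

def fitness(genome):
    """Number of bits matching the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    """Flip each bit independently with probability MUTATION_RATE."""
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    """Single-point crossover of two parent genomes."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(generations=200, seed=0):
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break
        # Keep the fitter half (elitism) and refill with mutated offspring.
        parents = pop[: POP_SIZE // 2]
        pop = parents + [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(POP_SIZE - len(parents))
        ]
    return max(pop, key=fitness)

best = evolve()
```

The point is not the toy problem but the shape of the algorithm:
nothing in the loop knows anything about the target except a scalar
fitness score, which is exactly why the same scheme is so general and
so inefficient.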

> The question I wonder about is, if the genii we talked about a few days
> ago granted the (misguided?) wish to speed up mentality a million-fold,
> would the resulting person be a super-intelligence simply in terms of
> applying his own native reasoning powers to the problems he faced.

Given sufficient motivation, disregarding the problem of sensory
interaction (which would probably cause the person to go mad within a
second), and assuming he had been taught one universal intelligence
algorithm, the answer would be Yes.

Nicholas Bostrom