Why Not a Planet Of The Apes?

Robin Hanson (hanson@hss.caltech.edu)
Mon, 9 Jun 1997 10:47:23 -0700 (PDT)

Nick Nicholas
>It's our society and the scarcity and cost of chimps; virtually all
>of the great apes are at some point between endangered and extinct.

But we seem to have large numbers of fast-breeding monkeys for lab
experiments. Couldn't we breed many more of them quickly if we had
substantial other uses for them?

John K Clark writes:
>It's difficult and expensive to program a monkey to perform a task, and when
>you're finished all you have is one trained monkey. It might be just as
>difficult to train a primitive AI to do the task, but when you're finished
>you have millions of trained AIs.

This applies if the limiting step is training them for each task. But
from the discussion here it seems that the problem is really
domesticating them enough so they choose to do the tasks given them.
So the puzzle is why we haven't been able to domesticate them yet.

>While aggression, dominance games, and shirking work would probably
>help a monkey's reproduction, it's going to be counterproductive for
>an AI, at least as long as AIs depend on humans for reproduction.

This behavior may have helped monkeys in the past, but it is
counterproductive now. So will it take AIs a similarly long time to
figure out what behavior is in their interest?

>Long domestication tends to make animals more docile (e.g. dogs vs.
>wolves) and AIs will presumably start out docile, from both design
>goals and (presumably) little need for self-motivation in the early

I think you're arguing that docility is irrelevant to increasing
intelligence. But maybe self-motivation is important to the process.

Anders Sandberg writes:
>The main problem with using other lifeforms for various tasks is that
>they are usually evolved for something completely different. It is
>better to evolve life that fits the task or change the task so it
>fits the life than forcing life to adapt to the task.

This seems to go against the idea of general intelligent agents, able
to quickly adapt themselves to different tasks.

OK, given all this intelligent comment, I can reframe the puzzle. Why
does it seem more promising to most people to build up smart
cooperative agents from scratch, as in an AI approach, than to
domesticate existing very smart but not-cooperative-enough agents? Is
domestication really that hard compared to learning how to organize
intelligence and acquiring all that common sense knowledge?

Sure, some tasks have physical environments and timescales requiring
human-made computers. But this doesn't apply to most tasks now done
by humans.

Robin D. Hanson hanson@hss.caltech.edu http://hss.caltech.edu/~hanson/