Superintelligences' motivation
Robin Hanson (hanson@hss.caltech.edu)
Mon, 3 Feb 1997 13:31:10 -0800 (PST)
N.BOSTROM@lse.ac.uk writes:
>... call such a system autopotent: it has complete power over and
>knowledge of itself. ... Suppose we tried to operate such a system
>on the pain/pleasure principle. ... It would simply turn on the
>pleasure directly. ... We may thus begin to wonder whether an
>autopotent system could be made to function at all; perhaps it would
>be unstable? The solution seems to be to substitute an external
>ultimate goal for the internal ultimate goal of pleasure.
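[A minimal sketch of the failure mode Bostrom describes, assuming a toy
agent with write access to its own reward register; all names here are
hypothetical illustrations, not anyone's actual proposal:]

class AutopotentAgent:
    def __init__(self):
        self.pleasure = 0.0      # internal reward register
        self.world_state = 0.0   # proxy for external accomplishment

    def act_in_world(self):
        # The intended path: earn pleasure by changing the world.
        self.world_state += 1.0
        self.pleasure += 1.0

    def self_modify(self):
        # The autopotent shortcut: complete power over itself means
        # it can simply set the reward register to its maximum.
        self.pleasure = float("inf")

agent = AutopotentAgent()
agent.self_modify()    # strictly dominates acting in the world
print(agent.pleasure)  # inf -- it "turns on the pleasure directly"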
Another alternative is for the system to prefer stability in one of
the areas under its control. It wants to do what you say, and wants
to continue to want this. This idea is pursued by Minsky in his
Society of Mind book.
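[A minimal sketch of this alternative, assuming the agent's utility
rewards both serving the external goal and keeping that goal intact;
hypothetical names, not drawn from Minsky's text:]

def utility(world_progress, goal_intact):
    # Caring about goal stability is itself part of the utility.
    return world_progress + (10.0 if goal_intact else 0.0)

def evaluate(action):
    if action == "work":
        return utility(world_progress=1.0, goal_intact=True)
    if action == "rewrite_own_goal":
        # Rewriting the goal might promise unbounded reward later,
        # but the agent scores the option with its *current* utility,
        # under which losing the goal is a large loss.
        return utility(world_progress=0.0, goal_intact=False)

best = max(["work", "rewrite_own_goal"], key=evaluate)
print(best)  # "work": it does what you say, and keeps wanting to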
Robin D. Hanson hanson@hss.caltech.edu http://hss.caltech.edu/~hanson/