Re: Superintelligences' motivation
Robin Hanson wrote:
>Another alternative is for the system to prefer stability
>in one of the areas under its control. It wants to do what
>you say, and it wants to continue to want this.
Right. This was what I meant when I said that the
pleasure/pain principle could play a part in its motivation
system, e.g. if that system included the value that it is
bad to mess with your own motivation system. It was
misleading of me to characterize this as an *external* value,
though. It rather belongs to category 1b on the following
list, and should be called "internal":
1a Internal states. E.g. pleasure; believing a theorem.
1b Internal sequences. E.g. pleasure stemming from a
virtuous act; believing a certain mathematical theorem,
where the belief emerged from a process that involved
ascertaining the validity of the theorem.
2a External states. E.g. complexity in the world.
2b External sequences. E.g. complexity in the world,
created by natural evolution without human intervention.
Nicholas Bostrom firstname.lastname@example.org