Re: Posthuman mind control (was RE: FAQ Additions)

Eliezer S. Yudkowsky (sentience@pobox.com)
Tue, 02 Mar 1999 01:01:06 -0600

Nick Bostrom wrote:
>
> My answer to this is a superposition of three points: (1) I
> explicitly allowed that fundamental values could change, only that
> (except for the mind-scan) the change wouldn't be rationally brought
> about.
> For example, at puberty people's values may change, but it's not
> because of a rational choice they made.

Are you arguing that, say, someone who was brought up as a New Age believer and switches to being an agnostic is not making a rational choice?

> (2) Just because somebody
> calls a certain value fundamental doesn't mean it actually is
> fundamental.

So fundamental values are... whatever values don't change? Please clarify by defining the cognitive elements constituting "fundamental values". I make the following assertion: "There are no cognitive elements which are both invariant and given the highest priority in choosing goals."

Another question: What are *your* "fundamental values" and at what age did you discover them?

> (3) With
> imperfectly rational beings (such as humans) there might be conflicts
> between what they think are their fundamental values. When they
> discover that that is the case, they have to redefine their
> fundamental values as the preferred weighted sum of the conflicting
> values (which thereby turned out not to be truly fundamental after
> all).

Why wouldn't this happen to one of your AIs?
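
A minimal Python sketch of the "preferred weighted sum" move Bostrom
describes: two values that were each thought to be fundamental disagree
about which option is best, and the choice ends up governed by their
weighted combination. The option names, scores, and weights below are
invented for illustration, not a claim about how any actual goal system
is built.

# Two supposedly "fundamental" values conflict over which option to take,
# so the agent resolves the conflict with a preferred weighted sum.
# All names and numbers here are hypothetical.
options = {
    "option_a": {"value_1": 0.9, "value_2": 0.2},
    "option_b": {"value_1": 0.3, "value_2": 0.8},
}

# The agent's preferred trade-off between the two conflicting values.
weights = {"value_1": 0.6, "value_2": 0.4}

def combined_score(scores):
    # Weighted sum of the conflicting values for a single option.
    return sum(weights[v] * s for v, s in scores.items())

best = max(options, key=lambda name: combined_score(options[name]))
print("chosen:", best)
print({name: round(combined_score(s), 2) for name, s in options.items()})

Once the conflict is discovered, it is the weighted sum, not either value
on its own, that actually drives the choice; that is the sense in which
the original two values "turned out not to be truly fundamental after
all."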

> "Do what is right" sounds almost like "Do what is the best thing to
> do", which is entirely vacuous.

I wouldn't try to bring about a Singularity if I thought it was wrong. Thus, "do what is right" is a goal of higher priority than "bring about a Singularity". If something else were more probably right, I would do that instead. It is thus seen that "bring about a Singularity" is not my fundamental goal.

> I suspect there would be many humans who would do exactly that. Even
> if none did, such a mindset could still evolve if there were
> heritable variation.

Guess I'll have to create my AI first, then, and early enough that nobody can afford to have it reproduce.

> I'm glad to hear that. But do you hold the same if we flip the
> inequality sign? I don't want to be wiped out by >Hs either.

I do not hold the same if we flip the inequality sign. I am content to let transhumans make their own judgements. They would not, however, have any claim to my help in doing so; if they need my help, they're not transhumans. I would, in fact, actively oppose ANY attempt to wipe out humanity; any entity with enough intelligence to do so safely (i.e. without the chance of it being a horrible mistake) should be able to do so in spite of my opposition.

-- 
        sentience@pobox.com         Eliezer S. Yudkowsky
         http://pobox.com/~sentience/AI_design.temp.html
          http://pobox.com/~sentience/sing_analysis.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.