Re: FAQ Additions (Posthuman mind control)

Michael S. Lorrey (retroman@together.net)
Tue, 23 Feb 1999 19:12:54 -0500

"Eliezer S. Yudkowsky" wrote:

> Nick Bostrom wrote:
> >
> > Two points: First, a being who has a certain fundamental value
> > *doesn't want to change it*, per definition. So it's not as if these
> > guys will think they are being mind-fucked and try to figure out a
> > way to get around it. No more than you are trying to abolish your own
> > survival instinct just because you know that it is an artifact of our
> > evolutionary past.
>
> You are *wrong*. Morality is not *known* to be arbitrary, and that
> means the probabilistic landscape of desirability isn't flat. I *am*
> trying to abolish my survival instinct, because I know that it's an
> artifact of my evolutionary past, and is therefore - statistically
> speaking - highly unlikely to match up with the right thing to do (if
> there is one), a criterion which is totally independent of what my
> ancestors did to reproduce. Remember, every human being on this planet
> is the product of a successful rape, somewhere down the line.

OH, puhleeze. So just because one of my knuckle-dragging ancestors happened to press the issue means that I have to throw out all of the incredibly smart, difficult-to-learn, near-instantaneous responses that all of my ancestors developed in order to survive and eventually make me? Give it up. Let's give back North America while we are at it, oh, and since we came out of Africa, let's give the entire world back to the animals.... Get real.

You are also extremely wrong. Statistically speaking, most of the time your instinctive response is EXACTLY the right thing to do. The only time there may be a conflict is in cases where technology has been designed to be counter-intuitive. That is a problem of bad engineering, not bad evolution.

> Your posthumans will find their own goals. In any formal goal system
> that uses first-order probabilistic logic, there are lines of logic that
> will crank them out, totally independent of what goals they start with.

How many other people or AIs will they destroy while they relearn from scratch what it took us a LONG time and a LOT of lives to learn?

>
> I'm not talking theory; I'm talking a specific formal result I've
> produced by manipulating a formal system. I will happily concede that
> the *truth* may be that all goals are equally valid, but unless your
> posthumans are *certain* of that, they will manipulate the probabilistic
> differentials into concrete goals.
>
> It's like a heat engine. Choices are powered by differential
> desirabilities. If you think the real, factual landscape is flat, you
> can impose a set of arbitrary (or even inconsistent) choices without
> objection. But we don't *know* what the real landscape is, and the
> probabilistic landscape *isn't flat*. The qualia of joy have a higher
> probability of being "good" than the qualia of pain. Higher
> intelligence is more likely to lead to an optimal future.

It is also just as likely to produce high levels of failure along the way if they refuse to learn from our past, or if we refuse to teach them.

> When you impose a set of initial goals, you are either assuming that the
> landscape is known with absolute certainty to be flat (an artificial and
> untrue certainty) or you are imposing a probabilistic (and thus
> falsifiable) landscape.

Not necessarily. It provides the user with a bit of prior knowledge about the space, which may or may not be entirely accurate. It may influence their actions, yes, but it will also help prevent potential catastrophes.

> Can we at least agree that you won't hedge the initial goals with
> forty-seven coercions, or put in any safeguards against changing the
> goals? After all, if you're right, it won't make a difference.

If I'm right, it will make every difference. Imagine thousands or millions of transhuman Tim McVeighs 'experimenting' on the world while they relearn everything that we already know....

Mike