Re: Singularity: Generation gap

Eliezer S. Yudkowsky (sentience@pobox.com)
Sat, 27 Sep 1997 21:41:53 -0500


Eric Watt Forste wrote:
>
> > Omniscient FAPP means that you can visualize (and perfectly predict)
> > all other "agents" in a game-theoretic scenario. While multiple
> > competing OFAPP agents are questionable (halting problem?), a Power
> > deciding whether to force-upload humanity could well be OFAPP,
> > game-theoretically speaking.
>
> Thank you for the very clear and precise definition. I know
> that I tend to be more skeptical about scalar comparisons
> between the computational power of the kinds of Turing machines
> that we know how to make and the computational power of the
> kinds of Turing machines that we are than you do. If I
> swallowed Moravec's estimates of these things whole, I'd share
> your concern. But I don't know whether or not we know how to
> measure the computational power of chordate nervous systems in
> a way that we can compare directly to the computational power
> of our silicon abacuses.

It's also possible that our actions in ambiguous cases are quantum-unpredictable:
some arbitrary function of which neurons happen to fire at random. But - and I'd
like to adjust my definition a bit in retrospect (so much for "clear") - OFAPP
should state that you can perfectly predict the *probabilities* of all outcomes.

> There's research going on to implement sophisticated neural nets
> in silicon hardware, and that research route might lead to an
> omniscient FAPP entity someday even if we are fundamentally different
> from abacuses.

Especially if our decisions are not determined by random quantum collapses,
but by genuine and predictable rational reasoning.

> But a review of the taxonomy of neural net
> architectures (I usually distinguish between feedforward and
> recursive architectures, and make a second distinction depending
> on whether the learning algorithm uses feedback from the exterior
> environment or not) makes it clear that chordate nervous systems
> are both recursive in architecture and (to some extent or another)
> seek out and use reinforcing behavior from the environment in the
> learning algorithms. It seems to me that we have very little
> technical experience building and training nets that use *all* the
> architectural tricks, or in other words, building and training nets
> that even remotely resemble ourselves or even other real animals.

"Neural nets are not built in imitation of the human brain. They are built in
imitation of a worm's brain, and when we have neural nets down straight we'll
have a long way to go." (self-quote).
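
A bare-bones sketch (Python, purely illustrative) of the distinction Eric is
drawing - toy scalar "neurons", nothing remotely like a real nervous system,
and the function names are mine:

# Feedforward: output is a function of the current input alone.
def feedforward_step(x, w):
    return w * x

# Recurrent: output also depends on the unit's own previous output.
def recurrent_step(x, h_prev, w_in, w_rec):
    return w_in * x + w_rec * h_prev

# Learning with environmental feedback: a scalar reward from outside the
# net nudges the weight; without it, learning uses only the inputs themselves.
def reinforce(w, reward, learning_rate=0.1):
    return w + learning_rate * reward

Per Eric's taxonomy, chordate nervous systems use both tricks at once, which is
exactly the combination we have the least experience training.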

> While I agree with you in wanting to guard against failures of
> imagination, venturing real predictions in a field as new and
> inchoate as this one is folly. I consider Moravec's predictions
> to be an enjoyable form of play, but I don't let them keep me up
> at night.

You'll note that I said Powers could be OFAPP. I was just pointing out that
our ethical systems derive a great deal of their pattern from:

(1) the possibility that you are wrong no matter how sure you are of yourself
("The ends do not justify the means");

(2) the fact that someone else might know more than you do no matter how dumb
you think they are ("Respect the opinions of others");

(3) the Hofstadterian Prisoner's Dilemma resolution, that your decision
process is partially duplicated in others ("what if everyone else decides to
do the same thing?").

Note that *all* *three* break down under even an *approximation* to OFAPP.
For all I know, they break down under first-stage transhumanity, no Powers necessary.

Our ethical laws are a paradox. They are very "fragile" derivatives of human
nature, in the sense that a slight alteration in nature would produce a large
difference in result. (My definition of "slight" may differ from yours.) But
we think of them as absolute, because only an absolute injunction can overcome
our natural inclination to break ethical rules for what seem like rational and
altruistic reasons.

But Anders' rational result-by-probability multipliers don't obey (1).
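
A toy illustration (Python) of the kind of result-by-probability arithmetic I
mean - the scenario, the numbers, and the framing are all mine and purely
hypothetical, not Anders' actual calculus:

# Expected utility as a "result-by-probability multiplier".
# Every name and number here is an illustrative assumption.
def expected_utility(outcomes):
    # outcomes: list of (probability, utility) pairs
    return sum(p * u for p, u in outcomes)

# An action that breaks an ethical rule "for the greater good":
# 95% chance my model of the world is right (big payoff),
# 5% chance I am wrong (moderate disaster).
break_rule = expected_utility([(0.95, +100.0), (0.05, -200.0)])

# Abstaining: nothing gained, nothing lost.
keep_rule = expected_utility([(1.0, 0.0)])

print(break_rule, keep_rule)   # 85.0 0.0 - the multiplier says "break it"

The multiplier cheerfully trades a 5% chance of catastrophe for the expected
gain; injunction (1) exists precisely to forbid that trade.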

Knowledge-rich Powers may not obey (2) (or at least may not see any reason to);
besides, (2) is a derivative of (1), in the sense that you don't estimate the
*probability* that someone knows more than you do.

And perhaps only a slight increase in emotional sophistication is necessary to
void the partial illusion of (3). One who *knows* that others are not reasoning
the same way, and can guess the outcome with near-certainty, may "defect" in a
non-iterated PD or any non-iterated game. (Force-uploading is "defecting", in
a sense.)
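
Again a toy sketch (Python) - the payoffs and the 0.99 are made up, and this is
only meant to illustrate the game theory, not how a Power would actually reason:

# One-shot Prisoner's Dilemma. The Hofstadterian argument for cooperating
# assumes my reasoning is partially duplicated in the other player; a player
# who can *predict* the other with near-certainty, and whose own choice does
# not change that prediction, does better by defecting.
PAYOFF = {                      # PAYOFF[(my_move, their_move)] = my payoff
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def expected_payoff(my_move, p_they_cooperate):
    return (p_they_cooperate * PAYOFF[(my_move, 'C')]
            + (1 - p_they_cooperate) * PAYOFF[(my_move, 'D')])

print(expected_payoff('D', 0.99))   # about 4.96
print(expected_payoff('C', 0.99))   # about 2.97 - prediction makes "defect" win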

Finally, an increase in *self*-knowledge - and boy, will that be easy to
program - voids a lot of the *strength* of ethical rules. Again, ours are
absolute only because our nature is to incorrectly ignore them, especially in
political issues. So even if all three remain, they may be voidable at will.

> But you may well know more neural-net theory than I do (because
> I'm guessing that you may well have more math than I do), so
> maybe I'll adjust my paranoia upward a notch or two. As the
> Bears song goes, "fear is never boring." ;)

As long as you refuse to act on it, there is no such thing as too much paranoia.

-- 
         sentience@pobox.com      Eliezer S. Yudkowsky
          http://tezcat.com/~eliezer/singularity.html
           http://tezcat.com/~eliezer/algernon.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.