Re: Why would AI want to be friendly?

From: Eugene Leitl (eugene.leitl@lrz.uni-muenchen.de)
Date: Fri Sep 29 2000 - 01:45:27 MDT


J. R. Molloy writes:
>
> I feel more affinity toward AI (which, we should remember, does not actually
> exist) than I have ever felt toward the nine tenths of humanity who
> remain immured in belief systems.
 
Now that is pretty harsh, particularly considering that everybody
capable of thought is immured in a belief system of one kind or
another. Say, how about a nice cask of Amontillado?
 
> > The first AI which has nucleated in the
> > network will propagate (sexually) mutated copies of itself all over the
> > place in a wormlike fashion.
>
> Now who's anthropomorphizing? How do you know what AIs will do? It seems as

Is that your understanding of humanity? I was actually thinking about
an inoculated Petri dish when I was writing this. The lowest common
denominator of life: make mutated copies of yourself.

> likely as not to me, that AIs will steer clear of sexuality for the very
> sensible reason that it clouds ratiocination and obscures common sense.
 
Groan. We're on very different buses here, again. Let's see: we have a
cracker (or a psychotic researcher) writing an evolving Net worm, or a
piece of Net infrastructure spontaneously getting frisky and zooming
off because it has fallen into an undesigned state, suffered a
bit-flip mutation, or is being operated in an unforeseen context.

Assuming this, a system which doesn't propagate gets selected out very
early. A system which mutates itself in a single copy will screw
itself up sooner or later, because you can't predict what impact a
given change will have; it has to copy itself all over the place. A
nonviable mutation frees the slot it occupied, by virtue of the
occupant's reduced fitness relative to its surroundings, to be
reseeded from the population of viables. A population of critters in
a limited-resource theater is solving an optimization task, and hence
doing a lot more than an isolated critter, or even a population of
isolated critters. A lot of variation is driven by recombination
alone, so you will see fitness increase even if you switch off
mutation yet allow pairs of individua to recombine their genomes
(this is more than just a poor man's PRNG). This is what that sex
thing is about.
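
To make the recombination-only point concrete, here is a toy sketch
of mine (Python, with made-up parameters like GENOME_LEN and
POP_SIZE; nothing like this is running on the Net): bitstring
critters compete for a fixed number of slots, the unfit lose their
slots, and those slots are reseeded from the viables by crossover
alone. Best fitness in the population still climbs:

import random

# Toy model of the claim above: critters compete for a fixed number of
# slots, nonviable occupants free their slots, and the freed slots are
# reseeded from the viables by recombination alone (mutation off).
GENOME_LEN = 64    # bits per critter (made-up parameter)
POP_SIZE = 100     # fixed resource theater: number of slots
GENERATIONS = 60

def fitness(genome):
    # Deliberately trivial optimization task: count the 1-bits.
    return sum(genome)

def crossover(a, b):
    # Single-point recombination of two parent genomes.
    point = random.randint(1, GENOME_LEN - 1)
    return a[:point] + b[point:]

random.seed(42)
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    # Reduced fitness relative to the surroundings selects you out:
    # the lower-scoring half loses its slots ...
    population.sort(key=fitness, reverse=True)
    viables = population[:POP_SIZE // 2]
    # ... and the freed slots are reseeded from the population of
    # viables, by recombination only. No mutation operator anywhere.
    children = [crossover(*random.sample(viables, 2))
                for _ in range(POP_SIZE - len(viables))]
    population = viables + children
    if gen % 10 == 0:
        print(gen, max(fitness(g) for g in population))

Of course recombination only reshuffles alleles that already exist in
the population; once a locus fixates, crossover can't restore
diversity there, which is why in practice you'd leave a little
mutation switched on as well.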

Rationalization and common sense? These are extremely high-level
concepts which do not translate immediately into AI. I think
"rationalization" presupposes a flawed, because dualized, self-model.
You're mostly talking about artifacts of introspection instead of the
real thing, like talking about the surface of a lake as the
antithesis of the rest of the lake.

As to common sense, I presume this means street smarts: the ability
to make the right decisions rapidly in the face of incomplete and/or
conflicting data. Darwinian systems are known to handle that very
nicely; why, they grew up in the street.

The notion that AIs are going to be crystal-clear citadels of pure
thought must appear ludicrous, because it makes no sense in an
evolutionary theater.

> > Because the copy process is much faster
> > than adding new nodes (even if you have nanotechnology) you have
> > instant resource scarcity and hence competition for limited
> > resources. Those individua with lesser fitness will have to go to the
> > great bit bucket in the sky.
>
> So AI individua will be *very* friendly toward each other. The question then is,

Huh? Your logic seems to work on very different principles from
mine. I just told you that AIs will have to compete and die just as we
do, and you say "they will be very friendly to each other". Remind me
never to become your friend, will you?

> "How far would AI extend its friendliness? Would it extend to you and me?"
> Perhaps it would. The friendliness of religious fanatics definitely does not.
 
I wonder where you went now; I wish I could follow.

> Yes, we don't ever want to appear unfriendly to something more intelligent than
> ourselves. But why does friendliness come into it at all? I mean, have you ever

We don't want to appear unfriendly to something powerful, yes,
because then it will feel compelled to come and kick our butts.
Perhaps it won't do that if we just lie low.

> thought that truth may have value greater than friendship? If our friends all

Value in which particular value system?

> become deranged as a result of some weird virus that makes them politicized
> zombies, perhaps we ought to place our trust in some artificial intelligence

It is impossible to achieve quantitative infectivity in a genetically
diverse populace with a given pathogen without full knowledge of the
diff list.

> which remains impervious to such an attack, some AI which remains sane and
> balanced. Shall we trust the natural intelligence of Hitler and Stalin more than

What makes you think an AI will remain sane and balanced? Clearly it can't.

> the robot intelligence of our own device?
 
I don't know what you're smoking, but I wish I had some of it; it
seems to be powerful stuff.

> "These technologies are just too much for a species at our level of
> sophistication."
> --Bill Joy

Gee, what a deep statement. Thanks, Billie-Joy. Never mind that:

1) due to the nature of these technologies, sustainable relinquishment
   doesn't work, and some of the countermeasures make the original
   problem set pale in comparison

2) these technologies are necessary to move on to the next levels of
   sophistication

3) if we don't do that, we're screwed in the long run anyway


