Re: Why would AI want to be friendly?

From: Eugene Leitl (eugene.leitl@lrz.uni-muenchen.de)
Date: Wed Sep 06 2000 - 02:22:12 MDT


J. R. Molloy writes:
> Why would AIs want to be friendly?

"Friendliness" (prevalence of cooperative strategies over defecting)
emerges in iterated (playing multiple rounds) interactions of agents,
provided they can measurably profit from such interactions (the total
is greater than the sum of it's parts = i.e. the iterated interaction
is not a zero sum game) and can identify the agents they've dealt in
the past. That's the stage.
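
For the lurkers, this is just Axelrod's old iterated prisoner's dilemma
result. A toy sketch in Python; the payoff values are the textbook ones
(T=5, R=3, P=1, S=0), everything else is invented for illustration:

# Toy iterated prisoner's dilemma. Payoffs are the usual textbook
# values; the strategies are the two classic extremes.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    # cooperate first, then mirror the opponent's last move; this is
    # the "identify the agents you've dealt with" requirement at work
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 'D'

def play(a, b, rounds=100):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = a(hist_b), b(hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (300, 300): mutual cooperation
print(play(tit_for_tat, always_defect))  # (99, 104): defection barely pays

Over enough rounds the cooperators outscore the defectors, but only
because the game is repeated, non-zero-sum, and each agent can recognize
its counterpart. Remove any of the three conditions and cooperation
stops paying.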

Iterated, ok. Identification, ok. Non-zero-sum game, ok. Now notice
that "cooperative" above does not cover humans scared out of their
wits, scrambling to contain a global evolving worm effortlessly
permeating their firewalls, nor does it cover dealing with an SI,
because the latter does not measurably profit from interactions with
humankind, similarly to how we don't interact profitably with an ant
colony under a tree three streets away from our villa. So we're either
hostile or insignificant. These are not good odds.

Primitive (say, doglike) intelligences, particularly primitive
intelligences far removed from the computational substrate they're
running on, are probably containable. Advanced (usable) intelligences
are uncontainable in principle: the very unpredictability (or else a
simple algorithm could do their work) and potential open-endedness
that make them so useful also make them impossible to contain.

I apologize for saying the same thing over and over again, but
apparently a few people have not yet heard these old, beaten-up
arguments.

> Because if cognitive scientists can make one AI, they can make millions
> (billions) of them, simply by copying them. When developers have sufficient

Right, and they will, because the darn things are so useful. An AI
good enough to play the markets would be worth a fortune, to say
nothing of something which could run a factory, or drive a car safely
[insert your favourite product here].

> numbers of intelligent agents, they simply let the IAs compete for the right to
> reproduce. These evolvable agents then do their own genetic programming. The

Nothing new; that's how you made the darn things in the first place.

> friendliest AIs get to reproduce, the rest get terminated. The socialization of

So you've got a billion AIs gyring and gimbling in their wabe. How
exactly do you intend to supervise them, and guarantee that their
behaviour is certifiably non-hostile in all possible situations?

If you know how to do it, please tell me, because I have not a ghost
of an idea how to do it in practice.
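
To make the difficulty concrete, here's a toy simulation (every number
in it is invented): agents which behave on every test you actually run,
and are hostile somewhere you didn't look, sail straight through the
culling.

import random

# Toy model of "cull the unfriendly". An agent is just the set of
# situations in which it misbehaves; the breeder can only observe a
# small sample of all possible situations. All numbers are invented.
SITUATIONS = range(1000)
TESTED = random.sample(SITUATIONS, 50)   # what the breeder gets to see

def make_agent():
    # half the agents are genuinely friendly; the other half are
    # hostile in a handful of situations the tests may never cover
    if random.random() < 0.5:
        return set()
    return set(random.sample(SITUATIONS, 5))

def passes_culling(agent):
    # survives if it behaved itself in every situation we tested
    return not any(s in agent for s in TESTED)

survivors = [a for a in (make_agent() for _ in range(10000))
             if passes_culling(a)]
latent = sum(1 for a in survivors if a)  # hostile somewhere untested
print(len(survivors), 'survivors,', latent, 'of them hostile off-test')

With these made-up parameters roughly three quarters of the hostile
agents survive the culling, because testing 50 of 1000 situations just
doesn't sample the 5 in which they defect. Certification over all
possible situations is the thing you'd actually need, and that's the
thing you can't have.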

> AIs would be a snap compared to the socialization of human children.
> By the time the AIs evolve to above-human-intelligence, they would be far more
> trustworthy than any human, due to many generations of culling the herd of AIs.

You never got bitten by a dog? A maltreated dog? A crazed, sick dog?
Never, ever?

> Think of AI as a huge population of intelligent agents rather than as a single
> entity, and the problem of making them friendly disappears. All you have to do

How strange, I thought it was the other way round. Statistical
properties, emergent properties, all things which you don't have to
deal with when all you have is a single specimen.

Of course, there is no such thing as a single specimen, unless the
thing instantly falls into a positive autofeedback loop -- the same
thing happened when the first autocatalytic set nucleated in the
primordial soup. But then you're dealing with a jack-in-the-box SI,
and all is moot, anyway.
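
The arithmetic of that autofeedback loop is not subtle. A back-of-the-
envelope, with a doubling time pulled out of thin air:

# Exponential self-copying; the one-hour doubling time is a pure
# assumption, the shape of the curve is the point.
copies, hours = 1, 0
while copies < 10**9:
    copies *= 2
    hours += 1
print(hours, 'hours from one specimen to', copies, 'copies')  # 30 hours

Thirty doublings take you from one specimen to a billion; quibbling
over the doubling time moves the date, not the conclusion.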

> is discard any artificially intelligent individuals which show symptoms of
> unfriendliness, and you end up with very friendly, docile, and helpful agents
> all very intent on breeding themselves into friendly, docile, and helpful SIs.
 
Yeah, *right*.

> AFAIK, Asimov never considered the possibility of genetic programming and
> evolvable machines which could compete against each other to reach higher levels
> of IQ. With thousands (or millions and billions) of artificially intelligent

Asimov was full of shit. Read Vinge, and always remember that he, too,
is a single human science fiction author, not a biblical prophet.

> agents battling each other to reproduce, all humans would need to do is to cull
> the herd, so to speak. Any unfriendly AI agents (unlike human children) could
> simply be terminated. This would result in a population of AIs with docility and
> compliance as part of their genetic code. Moravec's Mind Children could

It is exactly the prevalence of such profoundly unreasonable
expectations, utterly devoid of basic common sense, that makes me
consider breeding an AI a highly dangerous business. Because there is
such a demand for the things, and because so many teams are working on
them, someone will eventually succeed. What happens then is essentially
unknowable, and most likely irreversible, and that's why we should be
very, very careful about when, how, and in what context we do it.

> obviously number in the millions from the start, because as soon as one is
> developed, it could be duplicated ad infinitum. With an unlimited supply of
> genetically programmed AIs, their evolution could be guided and directed as
> experimenters see fit. The socialization of AI would consequently be far easier
> than the socialization of humans.

Socialize a god.


