Re: Why would AI want to be friendly?

From: J. R. Molloy (jr@shasta.com)
Date: Fri Sep 29 2000 - 16:00:12 MDT


Eugene Leitl writes,

> Now that is pretty harsh. Particularly considering the fact that
> everybody capable of thought is immured in a belief system. Say, how
> about a nice cask of amontillado?

I don't believe that everybody capable of thought is immured in a belief system.
Perhaps you haven't encountered thoughtful people who have freed themselves from
belief, but that doesn't mean they don't exist. One example I know of is Jiddu
Krishnamurti. He believed (in the sense that I have used the word) nothing at
all.
http://www.kfa.org

> Is that your understanding of humanity? I was actually thinking about
> an inoculated Petri dish when I was writing this. The lowest common
> denominator of life: make mutated copies of yourself.

Then we can at least assign this as an attribute of AI without anthropomorphizing?
Could one say, "If I were in the AIs' shoes, I'd reproduce myself like crazy"?

<I concur with what you write about reproduction>

> As to common sense, I presume this means street smarts. Ability to
> make the right decisions rapidly in face of incomplete and/or
> conflicting data. Darwinian systems are known to be able to handle
> that very nicely, why, they've grown up in the street.

I think Darwinian systems would do even better among machines.
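
To put a point on it, here is a toy sketch in Python (the target, the noise
level, and every other number are my own hypothetical stand-ins, not anyone's
real design): plain rank-and-cull selection still homes in on the right answer
even though every fitness reading it ever sees is noisy and self-contradictory.
That is the street-smarts point exactly.

# Toy sketch: Darwinian selection making "street smart" decisions from
# noisy, conflicting data. Hypothetical setup: each genome is a guess at
# an unknown 10-bit target; every fitness reading is corrupted by noise.
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]  # the "right decision", unknown to the population

def noisy_fitness(genome):
    """Match against TARGET, plus noise standing in for conflicting data."""
    true_score = sum(g == t for g, t in zip(genome, TARGET))
    return true_score + random.gauss(0, 2)  # readings disagree from run to run

def mutate(genome, rate=0.05):
    """Make a mutated copy: flip each bit with small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(10)] for _ in range(50)]
for generation in range(40):
    # Rank on the noisy signal, keep the top half, refill with mutated copies.
    population.sort(key=noisy_fitness, reverse=True)
    survivors = population[:25]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(25)]

best = max(population, key=lambda g: sum(x == t for x, t in zip(g, TARGET)))
print(best)  # usually at or near TARGET, despite never seeing a clean fitness value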

> The notion that AIs are going to be crystal clear citadels of pure
> thought must appear ludicrous. Because that notion does not make any
> sense in an evolutionary theatre.

Right. Evolution is not always rational. Sometimes it's more intelligent to be
a-rational (that is, to supersede rationalism altogether).

> > > Because the copy process is much faster
> > > than adding new nodes (even if you have nanotechnology) you have
> > > instant resource scarcity and hence competition for limited
> > > resources. Those individua with lesser fitness will have to go to the
> > > great bit bucket in the sky.
> >
> > So AI individua will be *very* friendly toward each other. The question then is,
>
> Huh? Your logic seems to be working on very different principles from
> mine. I just told you that AIs will have to compete and die just as we
> do, and you say "they will be very friendly to each other". Remind me
> to never become your friend, will you?

I was looking at AI reproduction in terms of sexual reproduction. AIs could have
sex, couldn't they? Sure they could, and they would compete. They'd compete for
mates, just as we do. So, just like us, they'd get *very* friendly as sexual
partners while remaining competitive at large. So, in the sense of becoming
friends in order to reproduce... well, I probably don't need to remind you never
to get friendly that way.
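
Here is the picture I have in mind, as a toy sketch (pool size, genome length,
and the fitness metric are all hypothetical stand-ins): pairs get *very*
friendly and recombine, while everyone competes at large for a fixed pool of
slots, and the least fit copies go to the great bit bucket in the sky.

# Toy sketch (hypothetical parameters throughout): AIs recombine with
# mates while competing for a fixed pool of resource slots.
import random

POOL = 30  # fixed resource limit: only this many individua survive a round

def fitness(genome):
    return sum(genome)  # stand-in metric: count of 1-bits

def crossover(mom, dad):
    """Sexual reproduction: splice two parent genomes at a random point."""
    cut = random.randrange(1, len(mom))
    return mom[:cut] + dad[cut:]

population = [[random.randint(0, 1) for _ in range(16)] for _ in range(POOL)]
for round_ in range(25):
    # Friendly with mates: each offspring comes from a chosen pair...
    offspring = [crossover(*random.sample(population, 2)) for _ in range(POOL)]
    # ...competitive at large: old and new fight for the same POOL slots.
    population = sorted(population + offspring, key=fitness, reverse=True)[:POOL]

print(max(fitness(g) for g in population))  # climbs toward 16 as the losers get culled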

> > "How far would AI extend its friendliness? Would it extend to you and me?"
> > Perhaps it would. The friendliness of religious fanatics definitely does not.
>
> I wonder where you went now, I wish I could follow.

No, you don't. You only want to pretend that you don't understand that an AI
would have as much right as you or I to choose its friends. Religious fanatics
are defective. Learn it, love it, live it.

> > Yes, we don't ever want to appear unfriendly to something more intelligent than
> > ourselves. But why does friendliness come into it at all? I mean, have you ever
>
> We don't want to appear unfriendly to something powerful, yes. Because
> then it will feel compelled to come and kick our butts. Perhaps it
> won't do that if we just lie low.

Fat chance. An actual >H AI (which would of course soon become even more >H)
would seek out and destroy the lowest-lying perps first.

> > thought that truth may have value greater than friendship? If our friends all
>
> Value in which particular value system?

Value, n. The desirability or worth of a thing; intrinsic worth; utility.

> > become deranged as a result of some weird virus that makes them politicized
> > zombies, perhaps we ought to place our trust in some artificial intelligence
>
> It is impossible to achieve quantitative infectivity on a genetically
> diverse populace with a given pathogen without full knowledge about
> the diff list.

We can't know that it is impossible to achieve quantitative infectivity on a
genetically diverse populace with a given pathogen without full knowledge about
the diff list until it has been tried experimentally.

> > which remains impervious to such an attack, some AI which remains sane and
> > balanced. Shall we trust the natural intelligence of Hitler and Stalin more than
>
> What makes you think an AI will remain sane and balanced? Clearly it can't.

"Clearly"? This is a sane and balanced analysis? Hardly.
Sane and balanced humans know that the majority of humans have displayed neither
sanity nor balance in the last five thousand years of history.

> > the robot intelligence of our own device?
>
> I don't know what you're smoking, but I wish I had some of it, it
> seems to be powerful stuff.

Indeed, you wish, but "if wishes were horses, beggars would ride." You obviously
need some powerful psychotropic chemicals to jolt you out of your neurosis.

> 1) due to nature of these technologies sustainable relinquishment
> doesn't work, and some of the countermeasures make the original
> problem set pale in comparison

You're definitely right about that. "Relinquishment" is a feeble attempt to make
one's security blanket cover one's present anxieties.

> 2) these technologies are necessary to move on to the next levels of
> sophistication

Perhaps these technologies are necessary to move humanity past its age-old
infantile predilection for mystery and myth. God is dead, supplanted by AI.

> 3) if we don't do that, we're screwed in the long run, anyway

We're screwed in the long run no matter what.
---------------------
"A slipping gear could let your M203 grenade launcher fire when you
least expect it. That would make you quite unpopular in what's left
of your unit." -- In the August 1993 issue, page 9, of PS magazine,
the Army's magazine of preventive maintenance


