Re: Why would AI want to be friendly? (Was: Congratulations to Eli, Brian ...)

From: J. R. Molloy (jr@shasta.com)
Date: Wed Sep 06 2000 - 18:50:29 MDT


Eugene Leitl writes:

> Witness the software industry: certainly not something evolving along
> a straight log plot. (Nor does the hardware industry; look at the growth
> of nonlocal memory bandwidth.) The only currently visible pathway
> towards robust AI is stealing from biology, reproducing the
> development of the brain in machina, or breeding something exploiting
> a given hardware architecture optimally by evolutionary algorithms.

Yes, using bio-tech and genetic programming to breed artificially intelligent
life sounds very promising. No matter how the job gets done, just do it. The sooner
the better. Incidentally, I don't actually trust artificially amplified
intelligence, whether biologically based or purely inorganic. But from what I've
seen of Homo sapiens sapiens, I trust the humans even less.

>
> > would evolve with higher and higher IQs. Those intelligent agents (IAs) that
> > display unfriendly tendencies could simply be terminated. This is a tremendous
>
> 1) Undecidability, dude. 2) Containment. If the thing is useful, it
> is dangerous. If it is useful, and can be built and/or reverse
> engineered easily enough it will be used. If it will be used in
> large enough numbers for a long enough time, the thing either will
> express its hidden malignant tendency or mutate into a potentially
> dangerous shape. There is zilch you can do about it.

That description sounds a lot like my neighbor's kids. People are unpredictable,
and they don't seem to care how much they overpopulate the Earth. Very dangerous
indeed. As a neutral observer (one that will eventually die anyway, no matter
what form of life inherits the universe), I guess I'd just as soon die at the
hands of a greater-than-human-intelligence robot as at the hands of some
retarded human religious warrior. No wait! That's not a guess, it's more like a
definite preference.
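
(For anyone who wants Eugene's point 1 spelled out: it's the standard
diagonalization argument. The toy sketch below is mine, not his; is_friendly,
misbehave, and paradox are hypothetical names invented purely to illustrate why
a perfect unfriendliness screener can't exist.)

# Toy sketch of the diagonalization behind "1) Undecidability."
# is_friendly() is a hypothetical perfect screener: True iff the given
# program never misbehaves. No such function can actually be written.

def misbehave():
    print("being unfriendly")

def is_friendly(program):
    raise NotImplementedError("no such screener can exist")

def paradox():
    if is_friendly(paradox):   # screener calls us friendly...
        misbehave()            # ...so we misbehave: the screener was wrong
    else:
        return                 # screener calls us unfriendly, we do nothing: wrong again

# Whatever answer is_friendly(paradox) gives, it is mistaken, so you cannot
# terminate every unfriendly agent in advance just by inspecting it.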

> AIs are very useful, since actual human intellects are scarce,
> expensive and slow to produce, and evolutionary algorithms are by far
> the simplest way to produce robust, open-ended intelligence. Look into
> the mirror.

Yep, there he is: an open-ended intelligence produced by evolutionary algorithms
staring back at me from the mirror. Can I trust this Homo sapiens sapiens more
or less than I'd trust one replicated from synthetic evolution? Frankly, it's a
toss-up.

> [insert derisive laughter here]

Okay, let me rephrase that: The advantage of working with Mind Children is that
(unlike biological children) you can preclude unfriendlies by terminating any of
them that laugh derisively, and replicating those that work hard at gaining your
favor. <grin>
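
(If you want the joke made literal, it's plain truncation selection from a
genetic algorithm. A minimal toy sketch follows; friendliness() is a made-up
fitness function for illustration only, and scoring real friendliness is of
course the part nobody knows how to do, per the undecidability point above.)

import random

# Toy truncation-selection loop: cull the "unfriendly" half each generation,
# then replicate the survivors with a little mutation.

def friendliness(agent):
    return sum(agent)                     # pretend bigger genes mean friendlier

def evolve(population, generations=50, mutation=0.1):
    for _ in range(generations):
        population.sort(key=friendliness, reverse=True)
        survivors = population[: len(population) // 2]       # terminate unfriendlies
        children = [[g + random.gauss(0, mutation) for g in p]
                    for p in survivors]                       # replicate favorites
        population = survivors + children
    return population

pop = [[random.random() for _ in range(8)] for _ in range(20)]
best = max(evolve(pop), key=friendliness)
print(round(friendliness(best), 2))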

--J. R.

"I will not be pushed, filed, stamped, indexed, briefed,
de-briefed, or numbered!" --Number 6



This archive was generated by hypermail 2b29 : Mon Oct 02 2000 - 17:37:23 MDT