Re: Controlling AI (was: Thinking about the future...)

Max More (maxmore@primenet.com)
Mon, 26 Aug 1996 10:38:46 -0700 (MST)


At 01:06 PM 8/26/96 GMT+2, Stephen de Vries wrote:
>
>What if you invented an algorithm which is crucial to the workings of
>a very successful a-life organism which is set to replace humanity.
>A meme you have created will be immortal, and the father-mother of a
>new era in evolution. Do you want to live on through your genes, or
>your memes ?

Peter will have his own answer, but mine is: NEITHER! Of course I'm not
particularly interested in living on through my genes (although I'd take
that option if it were the *only* one). While I want my memes, or some of
them, or those that survive critical testing, to live on, I'm vastly more
interested in *me* living on.

I am not my memes. I have ideas, as well as memories, dispositions, values,
and so on, but it's *me* -- the active, choosing being -- that I want to
survive and flourish, not primarily some ideas that I have.

So I thoroughly agree with Peter Voss's suggestions. I will encourage AI
researchers developing full-blown AI/SIs to build in a value system that
makes them less likely to destroy humans or disregard our interests. Even
more, I would rather support the development of synthetic intelligence
systems that can directly augment our own intelligence, so we don't get
left behind.

Fortunately, I suspect, SI (Synthetic Intelligence) research will mostly be
devoted to developing systems specialized for particular purposes, like air
traffic control, finding patterns in cosmological data, discovering new
mathematical theorems, inventing new drugs, etc. That's where the money
lies. I don't see so much commercial interest in pushing for a humanlike
intelligence. Obviously, though, once the components are developed, someone
will want to put them together just to show that it can be done.

I delight in advanced technology, but have no interest in self-sacrifice. I
want to live forever (or as long as I choose). This is vastly more important
to me than seeing that some other lifeform is created. I'm glad that Peter
raised this question. It deserves serious consideration.

BTW, I finished reading Host (by Peter James). I found it extremely hard to
put down and can highly recommend it, despite some criticisms I have of it.
It's extremely accurate in its description of cryonics and AI, and genuinely
scary. Reading it may lead the AI-at-any-cost folks to reconsider!

Upward and Outward!

Max

Max More, Ph.D.
maxmore@primenet.com
http://www.primenet.com/~maxmore
President: Extropy Institute (ExI)
Editor: Extropy
310-398-0375
http://www.primenet.com/~maxmore/extropy.htm