Controlling AI (was: Thinking about the future...)

Peter Voss (p.voss@ix.netcom.com)
Sun, 25 Aug 1996 10:17:22 -0700


At 06:48 AM 25/8/96 UT, David Musick raises an important issue:

>... It's also only a matter of time until very advanced artificial
>intelligence develops, and develops capabilities far exceeding human
>capabilities.

Agree. I think this will be a 'singularity' event, more so than nano-tech.
Nano-tech without AI will be quite limited, whereas AI will be able to
develop nano-tech.

>I also see no reason to suppose that humans will have control over these
>developments, past a certain point...
....
>I think some very advanced life forms will eventually emerge through
>technology. .... I don't think that current forms of life will really
>have much of a chance against these advanced forms.

I'll die trying! That's why I think it's crucial that we have major
breakthroughs in philosophy, ethics and psychology before AI outsmarts us
totally. If we can figure out what the purpose of our lives is, how we
determine values and how to motivate ourselves in a way that will achieve
our goals, then we have a chance of developing AI that shares our purposes.
It seems that AI and AL (artificial life) will also have some sort of basic
pain/pleasure motivator and some pre-assigned goals. Shades of Asimov's
three robotic laws? Another strategy is to develop AI first as an extension
to our own minds, to give us extra knowledge, IQ and creativity before AI
gets too autonomous. Key to both of these strategies is that we know what
we're doing and why we're doing it - that we don't leave AI design up to
random/evolutionary processes (don't get me wrong - I am definitely not a
statist. I'm talking about the preferred scientific approach).

>This actually isn't very disturbing to me -- I sort of think it's a good
>thing. Survival of the fittest. We're all for it when we're the fittest.

I don't think it's a good thing at all! My life is much more important to me
than the abstract concept of the theory of evolution. Seeing AI & AL happen
is an important value to me, but never more important than my survival and
well-being. In fact, the only reason AI & AL are of value to me is to the
extent that they expand my life (and the lives of people dear to me). An
important philosophical issue, I'd say.

Peter