controlling AI

Tim_Robbins@aacte.nche.edu
Tue, 27 Aug 1996 11:51:49 -0500


All this discussion of controlling AI (something we want, but that will
eventually be smarter and more powerful than us). We want to survive
(being human may have its limits, but we do have some fun)--so how do
we (humans/transhumans) continue in a universe where we are inferior to
AI? The most promising suggestions so far seem to be constructed around
making us indispensable to the life-processes of whatever AIs develop. In
a way, designing them from the start to be either dependent on us, or
symbiotic with us. The discussions so far have been good at stating that
any such AI system will have some corollary to the basic motivating
responses of living organisms--analogues to pleasure, pain, fear,
ambition, want, aversion, etc.

So why not start from the beginning with a basic motivational structure
that assures our existence, even primacy, in an AI-dominated system?
Why not design their basic motivational structures around LOVE or
CHERISHMENT of humans and humanity--in the way that we love our
friends, lovers, children, or even pets? This type of motivational
structure would also be certain to be passed on to the superior AIs
designed by other AIs--since they would want their creations to have
that same concern or appreciation for that which they love or cherish.

It shouldn't be any more difficult to design a love response, after many
basic prototypes, than a pleasure response.

I am also trying to point out here that what humanity "loves" is basically
arbitrary, except that it enhances our survival and is a legacy of our
evolution. We find it much easier to cherish trees or waterfalls than
spiders and stagnant, fetid ponds. But it's an arbitrary response. Not
only do we love our children or pets--we seek them out, because we
want to find something to love and cherish.

Why couldn't a similar motivational structure be used to "control" AI?

I certainly don't see my dog as a threat or burden, even though I am
superior. And basically I let it do what it wants, keep it fed and
cared for, play with it, teach it, and take it to the vet when it gets
sick.

I don't think I would mind being a "pet" of a superintelligent AI system,
so long as I had reasonable volition and freedom--we could ensure that too.

Don't think I'm overcommitted to the pet analogy. I prefer to consider
such a hypothetical constructed dependency/symbiosis as more of a
friendship, with the whole linkage being an ethic of caring and concern.
If they love/cherish our "humanity", part of that would be our freedom
and autonomy.

I think the advantage of such a relationship is that a hyperintelligent
being might try to redesign itself or its descendants away from
dependence on us--but a value/response structure of caring would
purposefully be passed on to descendants.

-Timber