So why not start from the beginning with a basic motivational structure
that assures our existence, even primacy, in an AI-dominated system?
Why not design their basic motivational structures around LOVE or
CHERISHMENT of humans and humanity, in the way that we love our
friends, lovers, children, or even pets? This type of motivational structure
would also be certain to be passed on to the superior AI designed by
other AI--since they would want their creation to have that same
concern or appreciation for that which they love or cherish.
It shouldn't be any more difficult to design a love response, after many
basic prototypes, than a pleasure response.
I am also trying to point out here that what humanity "loves" is basically
arbitrary, except that it enhances our survival and is a legacy of our
evolution. We find it much easier to cherish trees or waterfalls than
spiders and stagnant, fetid ponds. But it's an arbitrary response. Not
only do we love our children or pets--we seek them out, because we
want to find something to love and cherish.
Why couldn't a similar motivational structure be used to "control" AI?
I certainly don't see my dog as a threat or burden, even though I am
superior. And basically I let it do what it wants, keep it fed and
cared for, play with it, teach it, and take it to the vet when it gets sick.
I don't think I would mind being a "pet" of a superintelligent AI system, so
long as I had reasonable volition and freedom--we could ensure that too.
Don't think I'm overcommitted to the pet analogy. I prefer to consider
such a hypothetical constructed dependency/symbiosis to be more like
friendship, with the whole linkage being an ethic of caring and concern.
If they love/cherish our "humanity", part of that would be our freedom and volition.
I think the advantage of such a relationship is that a hyperintelligent
being might try to redesign itself or its descendants away from
dependence on us--but a value/response structure of caring would
purposefully be passed on to its descendants.