Re: Informed consent and the exoself

From: Dan Fabulich (daniel.fabulich@yale.edu)
Date: Mon Feb 21 2000 - 18:36:00 MST


Why not code in some contingent answers, more or less of the form "Maybe
X is right," where X is one of our moral intuitions, while leaving the
total probability that our moral intuitions are right well short of 100%?

These obviously aren't Asimov laws, since they're designed to be
overridden if the computer has some reason to do so. Thus the fact that
they can be "worked around" shouldn't bother us at all; nor do I think
these lines would backfire on us unexpectedly any more than any other line
of goal-code. While you might suppose that assigning our current beliefs
any probability greater than 0 is hubris, I'd say that giving them a small
positive value isn't lying; it's giving the AI the best information we
have and saying "OK, now run with this."
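
To be concrete, here's the sort of thing I mean, sketched in Python. The
names and numbers are mine, purely illustrative, and not anything out of
your actual design; the point is just that an intuition with probability
less than 1 gets weighed, not obeyed:

    # A rough sketch of "contingent answers": each moral intuition is a
    # hypothesis with a prior well short of 1.0, not an inviolable law.
    INTUITIONS = {
        "informed_consent": 0.7,    # "Maybe you should get consent before harming someone"
        "minimize_suffering": 0.8,  # "Maybe suffering is bad"
    }

    def weigh_action(expected_benefit, violations):
        """violations maps an intuition name to the estimated harm done
        *if that intuition turns out to be right*.  Because each intuition
        only carries probability p < 1, a strong enough reason can
        override it -- these are meant to be worked around."""
        expected_cost = sum(INTUITIONS[name] * harm
                            for name, harm in violations.items())
        return expected_benefit - expected_cost

    # A large enough benefit outweighs a consent violation, precisely
    # because the intuition was coded as "maybe", not as law.
    print(weigh_action(10.0, {"informed_consent": 4.0}))   #  7.2 -> go ahead
    print(weigh_action(1.0,  {"informed_consent": 4.0}))   # -1.8 -> don't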

A similar approach might involve "messages" which we load into the AI to
help accelerate its development; you discuss the possibility favorably in
"Coding a Transhuman AI." The idea here is that once the AI is smart
enough to understand our moral intuitions, we just tell it "We think that
you ought to get informed consent before you harm something/someone" and
let the AI sort out whether we're right or not. Assuming it assigns some
non-zero probability to our being right, we get a similar result; the
difference is that in this case it's the AI determining the probability
that we're right, rather than us, as in the former case.
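
In the same illustrative spirit (again, my toy numbers, not yours), the
"message" variant just means the probability is the AI's to assign and
revise, starting from whatever credence it gives the statement "the
programmers are right about this":

    # The AI assigns and updates the probability itself, e.g. by Bayes' rule.
    def bayes_update(prior, likelihood_if_right, likelihood_if_wrong):
        """Posterior probability that the stated intuition is right,
        given one piece of the AI's own evidence."""
        numerator = prior * likelihood_if_right
        return numerator / (numerator + (1 - prior) * likelihood_if_wrong)

    p = 0.5   # the AI's starting credence in our message, not ours
    # ...revised as its own experience bears on the question:
    for likelihood_if_right, likelihood_if_wrong in [(0.9, 0.4), (0.8, 0.5)]:
        p = bayes_update(p, likelihood_if_right, likelihood_if_wrong)
    print(p)  # roughly 0.78 -- the AI's probability that we're right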

Anyway, there's some reason to believe that there's just no way the
computer will derive informed consent without a lot of knowledge and
experience of the world around it, experience which we've had coded into
our genes over the course of millions of years, and without which we would
be sitting ducks. We're either going to have to wait millions of years for
the computer to figure out what we've had coded in already (and this time
cannot be accelerated by faster computing power; these are facts gained
from experience, not from deduction), or else we're going to have to tell
it about a few of our moral intuitions and then let the AI decide what to
make of them.

Unless of course you're going to give it a whole psychology module replete
with scores of rejectable facts about human nature. Actually, you're
probably going to have to do that anyway, if you want the thing to speak
English within our lifetimes.

I'm not advocating a Cyc model, but if you think Elisson is ACTUALLY going
to start with "I need a goal object" and deduce the existence of taxes and
tapioca pudding from that, you're fooling yourself.

-Dan

      -unless you love someone-
    -nothing else makes any sense-
           e.e. cummings
