Re: Why would AI want to be friendly?

From: Emlyn O'Regan (emlyn@one.net.au)
Date: Tue Sep 05 2000 - 13:27:33 MDT


> If, as seems to be the default scenario, all supergoals are ultimately
> arbitrary, then the superintelligence should do what we ask it to, for lack
> of anything better to do.
>
> -- -- -- -- --
> Eliezer S. Yudkowsky http://singinst.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
>

I'd be a bit hesitant about that claim. I think all you can really say is that
an AI will take what we ask it to do, think about it, and then do something
which is influenced by our request, but which may not fall within what we
would have considered the solution space of that request.

I don't think we can reasonably predict how an SI would behave. If we could
predict its behaviour, why would we need it at all? I suppose you could argue
that we can predict the type of behaviour (e.g. generally it will go along
with our requests) without predicting its precise implementation of that
behaviour.

I grok the "all supergoals are ultimately arbitrary" line. But I think it's
a crock. Something keeps us puny humans doing what we do, and it's more than
our incessant stupidity. There is some fundamental drive toward enlightenment,
which keeps people like yourself moving "forward". I think your SI must have
that too; how else would it manage to drive itself from seed to SI status?

In fact, I don't think the claim that it would do what we ask, for want of a
better idea, is supportable. Surely our suggestions would seem pathetic,
banal, puerile, or just idiotic to an SI. Even if the SI were so smart as to
have total knowledge of the entire universe, such that any course of action
were as meaningless as any other to it, surely it would be looking for
something more; something outside the box. It couldn't get to be what it is
without that desire.

Emlyn
