RE: Singularity: AI Morality

Billy Brown (bbrown@conemsco.com)
Thu, 10 Dec 1998 12:59:38 -0600

Samael wrote:
> > In an AI, there is only one goal system. When it is trying to decide if
> > an action is moral, it evaluates it against whatever rules it uses for
> > such things and comes up with a single answer. There is no 'struggle to
> > do the right thing', because there are no conflicting motivations.
>
> Unless it has numerous different factors which contribute towards its
> rules. After all, it would probably have the same problems with certain
> situations that we would. Would it think that the ends justify the means?
> What variance would it allow for different possibilities? It would be
> better at predicting outcomes from its actions, but it still wouldn't be
> perfect.
>
> Samael

The AI won't necessarily have a clear answer to a moral question, any more than we do. However, my point is that it won't have more than one answer - there is no 'my heart says yes but my mind says no' phenomenon.
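To make that concrete, here is a minimal sketch (hypothetical Python, not any actual AI design) of what a single goal system looks like: many factors can pull in opposite directions, but they all feed into one evaluation, so the output is a single verdict. It can be 'undecided', but it can never hold two conflicting answers at once.

    # Hypothetical sketch of a single goal system: many weighted factors,
    # one combined verdict. It can be uncertain, but it cannot disagree
    # with itself the way separate human motivational systems can.
    def evaluate_action(action, rules, weights):
        scores = [w * rule(action) for rule, w in zip(rules, weights)]
        total = sum(scores)
        if abs(total) < 0.1:        # too close to call: no clear answer
            return "undecided"
        return "moral" if total > 0 else "immoral"

    # Two factors pull in opposite directions, yet the system still
    # returns exactly one (possibly uncertain) answer.
    rules = [lambda a: a["lives_saved"],
             lambda a: -a["rights_violated"]]
    print(evaluate_action({"lives_saved": 1, "rights_violated": 1},
                          rules, [1.0, 1.0]))   # -> "undecided"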

Billy Brown
bbrown@conemsco.com