> > In an AI, there is only one goal system. When it is trying to decide if
> > an action is moral, it evaluates it against whatever rules it uses for such
> > things and comes up with a single answer. There is no 'struggle to do the
> > right thing', because there are no conflicting motivations.
> Unless it has numerous different factors which contribute towards its
> decision. After all, it would probably have the same problems with
> certain moral questions that we would. Would it think that the ends
> justify the means? What variance would it allow for different
> possibilities? It would be better at predicting outcomes from its
> actions, but it still wouldn't be perfect.
The AI won't necessarily have a clear answer to a moral question, any more than we do. However, my point is that it won't have more than one answer - there is no 'my heart says yes but my mind says no' phenomenon.
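To make the distinction concrete, here is a minimal, purely hypothetical sketch (the factor names and weights are invented for illustration): many factors can feed into the evaluation, but a single goal system collapses them into one answer. Uncertainty shows up as a score near the decision threshold, not as two competing verdicts.

```python
def moral_score(action_features, weights):
    """Combine all contributing factors into one scalar."""
    return sum(weights[k] * v for k, v in action_features.items())

def decide(action_features, weights, threshold=0.0):
    # One evaluation, one answer: approve iff the combined score
    # clears the threshold. There is no second system that can
    # return a conflicting verdict for the same action.
    return moral_score(action_features, weights) > threshold

# Hypothetical action with several contributing factors.
action = {"harm_prevented": 0.8, "deception_used": 0.6}
weights = {"harm_prevented": 1.0, "deception_used": -0.5}
print(decide(action, weights))  # prints a single verdict: True
```

A human-like 'heart vs. mind' conflict would require two separate scoring functions that can disagree; in this sketch there is only ever one.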