From: Billy Brown <email@example.com>
To: firstname.lastname@example.org <email@example.com>
Date: 10 December 1998 20:26
Subject: RE: Singularity: AI Morality
>> > In an AI, there is only one goal system. When it is trying to decide
>> > whether an action is moral, it evaluates it against whatever rules it
>> > uses for such things and comes up with a single answer. There is no
>> > 'struggle to do the right thing', because there are no conflicting
>> > motivations.
>> Unless it has numerous different factors which contribute towards its
>> decision. After all, it would probably have the same problems with
>> certain moral questions that we would. Would it think that the ends
>> justify the means? What variance would it allow for different
>> possibilities? It would be better at predicting outcomes from its
>> actions, but it still wouldn't be perfect.
>The AI won't necessarily have a clear answer to a moral question, any more
>than we do. However, my point is that it won't have more than one answer -
>there is no 'my heart says yes but my mind says no' phenomenon.
But it might have an 'objective' viewpoint.
"Tell me, oh wise AI, is it moral to X?"
"Oh supplicant, you are a utilitarian, so it is indeed moral to X. I would advise you not to let your Christian neighbours find out, as according to their moral system it is wrong to X."