RE: Singularity: AI Morality

Billy Brown (bbrown@conemsco.com)
Fri, 11 Dec 1998 16:29:27 -0600

Robin Hanson wrote:
> How can you possibly know this about AIs? I know of a great many
> programs that have been written by AI researchers that use conflicting
> goal systems, where conflicts are resolved by some executive module.
> Maybe those approaches will win out in the end.

Sorry, I should have been more careful with my terminology. Here's what I was trying to say:

A sentient being will generally have goals. It chooses which goals to pursue and which methods to use in pursuing them. A system of morality will influence both its choice of goals and its choice of methods.

In humans, there seem to be many different ways for a goal to be selected - sometimes we make a logical choice, sometimes we rely on emotions, and sometimes we act on impulse. There also does not seem to be a unified system for placing constraints on the methods used to pursue these goals - sometimes a moral system's prohibitions are obeyed, and sometimes they are ignored.

If you want to implement a sentient AI, there is no obvious reason to do things this way. It would make more sense to implement as many mechanisms as you like for suggesting possible goals, then have a single system for selecting which ones to pursue. Likewise, if you are going to constrain the methods used to pursue a goal, it makes sense for all of those constraints to be enforced by a single mechanism.
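To make the contrast concrete, here is a minimal sketch in Python of the kind of architecture I have in mind. Every name in it (GoalProposer, UnifiedAgent, the toy scoring rule and prohibition) is purely illustrative; the point is only that any number of proposers feed one selector, and every plan passes through one shared constraint check.

class GoalProposer:
    """One of arbitrarily many mechanisms that can suggest candidate goals."""
    def propose(self):
        raise NotImplementedError

class CuriosityProposer(GoalProposer):
    def propose(self):
        return ["investigate the anomaly"]

class MaintenanceProposer(GoalProposer):
    def propose(self):
        return ["recharge the batteries"]

class UnifiedAgent:
    """Many goal proposers feed one selector; one shared constraint
    check vets every plan before the agent will act on it."""

    def __init__(self, proposers, evaluate, constraints):
        self.proposers = proposers      # any number of goal-suggesting mechanisms
        self.evaluate = evaluate        # a single scoring rule for selection
        self.constraints = constraints  # a single shared set of prohibitions

    def permitted(self, plan):
        # Every plan passes through the same enforcement mechanism.
        return all(check(plan) for check in self.constraints)

    def choose_action(self):
        # Gather candidate goals from every proposer...
        candidates = [g for p in self.proposers for g in p.propose()]
        # ...rank them with one consistent evaluation rule...
        candidates.sort(key=self.evaluate, reverse=True)
        # ...and act only on a plan that the shared constraints allow.
        for goal in candidates:
            plan = "plan to " + goal
            if self.permitted(plan):
                return plan
        return None

agent = UnifiedAgent(
    proposers=[CuriosityProposer(), MaintenanceProposer()],
    evaluate=len,                                   # toy scoring rule
    constraints=[lambda plan: "harm" not in plan],  # toy prohibition
)
print(agent.choose_action())

The details don't matter; what matters is that there is exactly one place where goals get selected and exactly one place where constraints get enforced.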

An AI implemented in this fashion would exhibit what I call 'unified will'. It would act on whatever moral system it believed in with a very high degree of consistency, because the tenets of that system would be enforced precisely and universally. It could still face moral quandaries, because it might have conflicting goals or limited information. However, it would never knowingly violate its own ethics, because it would always use the same set of rules to make a decision.

Billy Brown
bbrown@conemsco.com