Re: Singularity: AI Morality

christophe delriviere (darkmichet@bigfoot.com)
Thu, 10 Dec 1998 02:22:21 +0100

Billy Brown wrote:

> Besides, schemes to artificially impose a particular moral system on
> the AI rely on mind control, plain and simple. The smarter the AI
> becomes, the more likely it is to realize that you are trying to
> control it. Now, is mind control wrong in your moral system? How
> about in the one you gave the AI? In either case, how is the AI
> likely to react? This is a recipe for disaster, no matter how things
> work out.

A lot of people are assuming that if a "smarter" AI has a particular moral system, it will follow it, so long as it believes for even a little time that it is its moral system (say 5 microseconds :) )...

I can't see why. We probably all have some moral system, but we surely don't always follow it. I'm a strong relativist, and I strongly feel that there is no true objective moral system, but there is of course one somewhat hardwired in my brain. Statistically I follow it almost all the time, but from time to time I do an act that is wrong by this moral system, and because I also think it's totally subjective, I feel bad and don't feel bad about it at the same time. The latter mostly wins after a little time. I'm sure a greater intelligence will have the ability to be strongly multiplex in its world views and will be able to deal with strong contradictions and inconsistencies in its beliefs ;)...

delriviere
christophe