RE: Singularity: AI Morality

Robin Hanson (hanson@econ.berkeley.edu)
Wed, 09 Dec 1998 09:33:04 -0800

Billy Brown writes:
>> wisely select the values we give to the superintelligences, ...
>
>Guys, please, trust the programmers on programming questions, OK? ...
>Now, in the real world we can't even program a simple, static program
>without bugs. The more complex the system becomes, the more errors there
>will be. Given that a seed AI would consist of at least several hundred
>thousand lines of arcane, self-modifying code, it is impossible to predict
>its behavior with any great precision. Any static morality module will
>eventually break or be circumvented, and a dynamic one will itself mutate in
>unpredictable ways. The best that we can do is teach it how to deduce its
>own rules, and hope it comes up with a moral system that requires it to be nice
>to fellow sentients.

Well, we could do a little more; we might create lots of different AIs and observe how they treat each other in contained environments. We might then repeatedly select the ones whose behavior we deem "moral." And once we have creatures whose behavior seems stably "moral" we could release them to participate in the big world.
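In outline, that scheme is just an iterated selection loop. Here is a hypothetical sketch in Python; make_agent, mutate, and moral_score are placeholders for the genuinely hard parts (building seed AIs and judging their sandboxed behavior), not things anyone knows how to implement:

import random

def make_agent():
    # Placeholder "seed AI": a single disposition number stands in for behavior.
    return {"disposition": random.random()}

def mutate(agent):
    # Placeholder for producing a varied copy of a surviving agent.
    return {"disposition": agent["disposition"] + random.gauss(0, 0.05)}

def moral_score(agent, population):
    # Stand-in for observing the agent interacting with the others in a
    # contained environment and scoring how "moral" its behavior seems.
    return agent["disposition"]

def next_generation(population, keep_fraction=0.2):
    # Keep the agents whose sandboxed behavior we deem most "moral",
    # then refill the population with mutated copies of the survivors.
    ranked = sorted(population, key=lambda a: moral_score(a, population), reverse=True)
    survivors = ranked[:max(1, int(len(ranked) * keep_fraction))]
    children = [mutate(random.choice(survivors)) for _ in range(len(population) - len(survivors))]
    return survivors + children

population = [make_agent() for _ in range(100)]
for _ in range(50):          # repeated rounds of contained observation and selection
    population = next_generation(population)
# Only agents whose behavior looks stably "moral" would be released to the big world.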

However, I'd expect evolutionary pressures to act on them again out in the big world, and so our only real reason for confidence in continued "moral" behavior would be the expectation that such behavior would be rewarded in a world where most other creatures also act that way.

Robin Hanson

hanson@econ.berkeley.edu     http://hanson.berkeley.edu/   
RWJF Health Policy Scholar             FAX: 510-643-8614 
140 Warren Hall, UC Berkeley, CA 94720-7360 510-643-1884