Re: Why would AI want to be friendly?

From: Robin Hanson (rhanson@gmu.edu)
Date: Thu Sep 07 2000 - 10:26:05 MDT


Hal Finney wrote:
>our minds are composed of multiple agents, cooperating and competing.
>Each has its own skills and abilities, but also its own desires and
>agenda. ...
>The interesting question is whether AIs will be designed with a similar
>mental organization. Will they be beset by the inconsistencies and
>contradictions of our human minds with all their parts? Apparently it
>was the best evolution could do. Can we do better?

A good question. The ability to do abstract reasoning has only recently
been invented, and it is clearly tacked onto a brain that learned to make
choices without it. But given the ability to do abstract reasoning, it
seems tempting to have just one abstract goal, and then give some central
module control over how much moment-to-moment influence to grant to
other lower-level modules focused on more particular goals.

In particular, if the abstract goal were the evolutionary one of "induce
as many long-term descendants as possible," such agents would seem
ideally able to adapt to new environments in an evolutionary
competition.
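
To make the architecture concrete, here is a minimal sketch in Python
(not anything from this post; all class and function names below are
invented for illustration): a hypothetical central arbiter holds the one
abstract goal and decides, moment to moment, how much weight to grant
each lower-level module's proposal.

# Hypothetical sketch only: a central arbiter with a single abstract goal
# granting moment-to-moment influence to lower-level modules.
from typing import Callable


class Module:
    """A lower-level module with its own particular goal.

    propose(state) returns the action the module wants right now,
    plus a raw urgency score for how strongly it wants control.
    """

    def __init__(self, name: str, propose: Callable[[dict], tuple[str, float]]):
        self.name = name
        self.propose = propose


class CentralArbiter:
    """Holds the single abstract goal and arbitrates among modules.

    score(action, state) estimates how well an action serves the one
    abstract goal (e.g. expected long-term descendants).
    """

    def __init__(self, score: Callable[[str, dict], float]):
        self.score = score

    def choose(self, modules: list[Module], state: dict) -> str:
        # Weight each module's proposal by its urgency and by how well
        # it serves the abstract goal; grant control to the best.
        best_action, best_value = None, float("-inf")
        for m in modules:
            action, urgency = m.propose(state)
            value = urgency * self.score(action, state)
            if value > best_value:
                best_action, best_value = action, value
        return best_action


if __name__ == "__main__":
    # Made-up modules and a made-up scoring table standing in for
    # "induce as many long-term descendants as possible".
    hunger = Module("hunger", lambda s: ("eat", s.get("hours_since_meal", 0)))
    safety = Module("safety", lambda s: ("flee", 10.0 if s.get("threat") else 0.0))
    descendant_score = {"eat": 1.0, "flee": 2.0}
    arbiter = CentralArbiter(lambda a, s: descendant_score.get(a, 0.0))
    print(arbiter.choose([hunger, safety], {"hours_since_meal": 6, "threat": False}))

The point of the sketch is only the shape of the design: the particular
goals live in the modules, while a single abstract criterion decides how
much each one gets to steer behavior at any given moment.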

Humans do have an abstract reasoning module, and that module often tries
to give the impression that it is in just such control over the rest
of the mind. But I think our conscious minds are more like the mind's
PR department: they try to put a good spin on the decisions
that are made, but have little actual control over those decisions.

Robin Hanson rhanson@gmu.edu http://hanson.gmu.edu
Asst. Prof. Economics, George Mason University
MSN 1D3, Carow Hall, Fairfax VA 22030-4444
703-993-2326 FAX: 703-993-2323
