Re: Why would AI want to be friendly?

From: Franklin Wayne Poley (culturex@vcn.bc.ca)
Date: Wed Sep 27 2000 - 15:59:05 MDT


On Tue, 26 Sep 2000, Eugene Leitl wrote:

> Franklin Wayne Poley writes:
>
> > Everyone who has gone to high school knows that there are underachievers
> > and overachievers. It is a matter of motivation having a weak correlation
> > with intelligence.
>
> Show me a live intelligent being completely lacking motivation.
>
The issue here was whether a machine would be 'motivated', and my reply is
that it will act according to its programming. If you program it to
simulate human motivation, it will do so. If you program
friendly or unfriendly AI, that is what you will get. If you give your
machines lots of autonomy and genetic programs that take them where you
can't possibly anticipate, then that too will unfold in accordance with
the programming...and heaven help all of us. AI machines can get out of
control just as any other machines can.
FWP



This archive was generated by hypermail 2b29 : Mon Oct 02 2000 - 17:39:16 MDT