Re: Why would an AI want to be friendly

From: Michael S. Lorrey (retroman@turbont.net)
Date: Wed Sep 27 2000 - 16:34:10 MDT


"J. R. Molloy" wrote:
>
> Brian D Williams writes,
>
> > Well, there's the William Gibson approach from Neuromancer: an
> > electromagnetic shotgun wired to its forehead.
>
> Good one, Brian. Of course this would make AIs want to avoid the shotgun
> blast, and not really make them want to be friendly. Some people have
> theorized that people are friendly because they fear the consequences of
> being unfriendly even more.

An excellent point. Negative reinforcement, whether physical or verbal, does
work on kids, though it is needed less when positive reinforcement is used as
well. An AI should be bright enough to quickly reprogram itself in response to
positive and negative stimuli. That is what parenting is all about: teaching
your kids to be good people. One of the prime problems in today's world is
that many people tossed out the old cultural standards of parenting with
nothing to replace them but half-witted, muddle-headed, mushy ideas that did
not evolve over time and were typically found wanting. Without the extended
family, and with so many broken families, most kids have little leadership by
example to learn from. Relying on unsupervised on-the-job training for one of
the most important jobs around is hardly the way to go about it. I would hope
such an approach is not taken with AI.
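
To put the reinforcement idea in concrete terms, here is a minimal sketch in
Python of an agent that "self-programs" by shifting its action preferences in
response to positive and negative feedback. The agent class, the action names,
and the learning rate below are illustrative assumptions for a toy example,
not anyone's actual design:

    import random

    class ToyAgent:
        """Illustrative only: learns action preferences from reward signals."""
        def __init__(self, actions, learning_rate=0.1):
            self.values = {a: 0.0 for a in actions}  # preference per action
            self.lr = learning_rate

        def choose(self):
            # Pick the currently most-preferred action, breaking ties randomly.
            best = max(self.values.values())
            return random.choice(
                [a for a, v in self.values.items() if v == best])

        def reinforce(self, action, reward):
            # Positive reward nudges the preference up; negative nudges it down.
            self.values[action] += self.lr * (reward - self.values[action])

    agent = ToyAgent(["cooperate", "defect"])
    for _ in range(100):
        act = agent.choose()
        # Hypothetical trainer: reward cooperation, punish defection.
        agent.reinforce(act, 1.0 if act == "cooperate" else -1.0)
    print(agent.values)  # "cooperate" ends up strongly preferred

The only point is mechanical: behavior drifts toward whatever the trainer
rewards and away from whatever it punishes, which is the crude analogue of
the parenting argument above.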


