Re: Keeping AI at bay (was: How to help create a singularity)

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon May 07 2001 - 10:56:26 MDT


Robert Wasley wrote:
>
> Lee Corbin wrote
> > It's as Eliezer (usually) never tires of stating: explicit emotions
> > such as these simply do not happen unless they're designed or evolved
> > somehow. Probably the toughest part is leaving behind our intuition that
> > where goes intelligence must go certain kinds of survival attitudes......
> > But artificial intelligence
> > isn't necessarily evolved in the way that natural intelligence is, and so
> > need not have such capabilities.
>
> This is a very good point and possibly even true. Nevertheless, interacting
> with such an intelligence will truly be a very alien experience, and as such
> the value of such intercourse would be very limited indeed. This is the
> reason designers around the world, from basic consumer software to robots,
> are trying to make them "human" so that we feel more comfortable interacting
> with them, thus deriving more value from the experience.

Even a Friendly AI might need to maintain a personality overlay that could
appear to take offense at insults and so on - a possibility I conceded
after watching episode 3 of "Bubblegum Crisis Tokyo 2040". I adjusted
pretty quickly by reminding myself of the AI's internal perspective - that
it was just requested behaviors being carried out, with no particular
meaning to the AI - after which I lost the impulse to flee screaming into
the night.  I'm just not sure that the rest of humanity would adjust
equally quickly to an android waitress who sees nothing particularly
wrong, offensive, or abnormal about, for example, being asked to lick
spilled coffee off someone's boots, or to an android secretary asked to
laugh for two minutes.  The point is that a personality overlay would
be maintained as a subgoal of Friendliness - of not sending people
screaming into the night - and wouldn't affect the underlying goal system.

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
