Well, I do confess to a perhaps inordinate fondness for
James Cameron epics--but "friendly" is of course a
human concept which, arguably, is not logic-based--or
wouldn't be for an entity not dependent upon friends
for its own well-being.
I present the following scenario: an AI programmed to be
friendly and oversee humanity, exterminating pesky
aggressors. Something goes wrong with the AI. Humans try
to shut it down to avoid catastrophe. The AI perceives
this as aggression (after all, why would we want to
harm Mr. Friendly...?), and so on...
Just what do you think you're doing, Dave?
HAL was unarmed.
--- "Eliezer S. Yudkowsky" <firstname.lastname@example.org>
> John Marlow wrote:
> > Okay, call me self-aggrandizing, but this has for
> > some time been my take on entrusting our fates to
> > machines--Marlow's Paradox:
> > "We cannot entrust our fate to machines without
> > emotions, for they have no compassion; we cannot
> > entrust our fate to machines with emotions, for
> > they are unpredictable."
> A Friendly AI is neither emotional nor unemotional.
> It is simply Friendly.
> > Anything purely logical would exterminate us as
> > unpredictable and dangerous. Anything emotional is
> > itself unpredictable and dangerous.
> You, sir, have been watching too much Hollywood
> cognitive science. The desire to exterminate
> unpredictable and dangerous things is itself an
> emotion.
> There is nothing inconsistent about the idea of a
> 'logical' (intelligent)
> entity whose goal is to be Friendly. (Why isn't it
> selfish? Because
> selfishness is an evolved attribute, and complex
> functional adaptations
> don't just materialize in source code. So how does
> the Friendliness get
> into the contents of cognition? Because we put it
> there. That basic
> inequality is what makes it all possible.)
> -- -- -- --
> Eliezer S. Yudkowsky
> Research Fellow, Singularity Institute for
> Artificial Intelligence