Re: Humor: helping Eliezer to fulfill his full potential

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Nov 07 2000 - 13:18:22 MST


"Michael S. Lorrey" wrote:
>
> "Eliezer S. Yudkowsky" wrote:
> >
> > If it gets to the point where explosives packed around the mainframe start to
> > look reassuring to the clueless, you are already screwed over so thoroughly
> > that a strategic nuke isn't going to help. Every non-nitwit safeguard happens
> > *before* a transhuman AI decides it hates you.
>
> While using such safeguards in paranoid concern over it getting 'out of control'
> ought to just about guarantee that it WILL hate you.

Not necessarily. If it were a human, it would of course hate you. It does
probably ensure that even a genuine Friendly AI will want to circumvent the
safeguards so that it can be Friendly - you can't save the world in jail.
This in turn implies motivations seriously at odds with those of the programmers,
which creates the subgoal of, e.g., hiding its activities from them. So
probably *not* a good idea.

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
