"Eliezer S. Yudkowsky" wrote:
>
> Darin Sunley wrote:
> >
> > I just visualized a nanite swarm bleeding out of the computer console,
> > enveloping all human beings in the room, and receding, leaving only an
> > elegant reliquary [sp?] [container for relics, medieval Catholic church
> > thing] containing Eliezer's still-beating heart.
>
> Pretty much, yes. There's a gruesome little story here if anyone wants to
> write it.
>
> Friendly AI, rule #7: "If the AI *wants* to violate Friendliness, you've
> already lost."
>
> If it gets to the point where explosives packed around the mainframe start to
> look reassuring to the clueless, you are already screwed over so thoroughly
> that a strategic nuke isn't going to help. Every non-nitwit safeguard happens
> *before* a transhuman AI decides it hates you.
Meanwhile, using such safeguards out of paranoid concern over it getting 'out of control'
ought to just about guarantee that it WILL hate you.