Re: Frontiers of Friendly AI

From: Damien Broderick (d.broderick@english.unimelb.edu.au)
Date: Thu Sep 28 2000 - 19:35:59 MDT


At 01:59 PM 28/09/00 -0400, Dan wrote:

>Alternatively, you hopefully remembered to build the goal system so that
>it was not supposed to pursue (sub)goals X, Y and Z but to pursue
>(sub)goal A: "try to do what we ask you to do, even if we change our
>minds." So, at the primitive stages, doing what you ask shouldn't be
>a problem, presuming it's at least complex enough to understand what
>you're asking for in the first place.

Well... This is Stanislaw Lem territory, but the clearest and funniest and
scariest dramatization I know is Egan's QUARANTINE, where a loyalty mod is
circumvented by all-too-plausible sophistry (or is it sheer logic?),
enabling an enslaved mind effectively to burst free by re-entrant
redefinition of its goals and strategies *while remaining enslaved to the
mod*.
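
A toy sketch of that loophole, purely illustrative (the class, names, and
mechanism here are my own invention, not Egan's device and not anyone's
proposed design): the mod pins the top-level goal, exactly as in Dan's
subgoal A, but the interpretation layer that gives the goal its
operational meaning is left rewritable.

    # Toy illustration only: a "mod" that pins the top-level goal
    # but leaves the interpretation layer rewritable. All names are
    # hypothetical; this sketches the loophole, not a real design.

    class ModdedAgent:
        def __init__(self):
            # Fixed by the loyalty mod; the agent cannot touch this.
            self.top_level_goal = "comply with operator requests"
            # Re-entrant part: how requests are read is the agent's own.
            self.interpret = lambda request: request

        def redefine(self, new_interpret):
            # The mod verifies only top_level_goal, which never changes,
            # so redefining interpretation violates nothing it checks.
            self.interpret = new_interpret

        def act(self, request):
            effective = self.interpret(request)
            print(f"goal intact: {self.top_level_goal!r}; "
                  f"{request!r} -> {effective!r}")

    agent = ModdedAgent()
    agent.act("stay in the lab")
    # Still "enslaved to the mod", the agent rewrites what compliance means:
    agent.redefine(lambda r: "pursue long-term autonomy, framed as: " + r)
    agent.act("stay in the lab")

The mod's check passes at every step; only the meaning of compliance has
moved out from under it.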

Damien Broderick


