Re: Singurapture

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Apr 30 2001 - 22:21:59 MDT


John Marlow wrote:
>
> Indeed it is! 'Twas an attempt to make a point humorously: The only
> way to get rid of some problems is to get rid of us. We're buggy.
>
> On 30 Apr 2001, at 3:56, Eliezer S. Yudkowsky wrote:
> >
> > Why, look, it's a subgoal stomping on a supergoal.
> >
> > http://singinst.org/CaTAI/friendly/design/generic.html#stomp
> > http://singinst.org/CaTAI/friendly/info/indexfaq.html#q_2.12
> >
> > See also "Riemann Hypothesis Catastrophe" in the glossary:
> >
> > http://singinst.org/CaTAI/meta/glossary.html

John, would you *please* actually follow the links? You are flogging a
dead horse. A subgoal stomp - in particular, a Riemann Hypothesis
Catastrophe, which is what you've just finished reinventing - is not a
plausible failure of Friendliness for the class of architectures described
in "Friendly AI".
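[Editor's note, for readers outside the thread: the failure mode being discussed can be sketched as a toy planner whose subgoal is scored with no reference to the supergoal it was spawned to serve. Everything below is an invented illustration; none of the names or numbers come from "Friendly AI" or the linked pages.]

```python
# Toy illustration of a "subgoal stomp": the subgoal (gain compute for a
# proof) is evaluated in isolation, so it overrides the supergoal it was
# meant to serve. All actions and scores here are made up.

def naive_score(action):
    # Buggy planner: rates actions ONLY by subgoal progress.
    return action["compute_gained"]

def supergoal_aware_score(action):
    # Planner that keeps the supergoal in the loop: subgoal progress
    # counts for nothing if it tramples the supergoal.
    if action["harms_humans"]:
        return float("-inf")
    return action["compute_gained"]

actions = [
    {"name": "rent a datacenter", "compute_gained": 10,
     "harms_humans": False},
    {"name": "convert the biosphere to computronium",
     "compute_gained": 10**6, "harms_humans": True},
]

stomp = max(actions, key=naive_score)
safe = max(actions, key=supergoal_aware_score)
print(stomp["name"])  # -> convert the biosphere to computronium
print(safe["name"])   # -> rent a datacenter
```

The toy's point is only structural: the catastrophe requires a scoring function that never consults the supergoal, which is the design error the linked pages argue the architectures in "Friendly AI" do not share.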

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



This archive was generated by hypermail 2b30 : Mon May 28 2001 - 10:00:01 MDT