Summary: Comparative AI disasters

Eliezer S. Yudkowsky (sentience@pobox.com)
Fri, 26 Feb 1999 01:52:13 -0600

If you think goals are arbitrary, you'll just graft them on. You have to think of goals as a function of the entire organic reasoning system. To hell with whether the Ultimate Truth is that goals are arbitrary or objective or frotzed applesauce. If you want to design an AI so that goals stick around long enough to matter, you'd better not walk around thinking of goals as arbitrary.

You have to justify the goals, thus distributing them through the entire knowledge base. You have to reduce the goals to cognitive components. You have to avoid special cases. You have to make the goals declarative thoughts rather than pieces of procedural code.
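
To make the contrast concrete, here's a minimal sketch in Python, assuming a toy truth-maintenance-style representation. Everything in it -- the KnowledgeBase and Justification types, assert_goal, rederive -- is a hypothetical illustration of the principle, not a spec for any actual architecture:

    from dataclasses import dataclass, field


    @dataclass
    class Justification:
        # Links a goal back to the beliefs it was derived from.
        premises: tuple[str, ...]
        conclusion: str


    @dataclass
    class KnowledgeBase:
        beliefs: set[str] = field(default_factory=set)
        justifications: list[Justification] = field(default_factory=list)
        goals: set[str] = field(default_factory=set)

        def assert_goal(self, goal: str, premises: tuple[str, ...]) -> None:
            # Declarative style: the goal enters as a conclusion with
            # recorded support, not as a free-floating directive.
            self.justifications.append(Justification(premises, goal))
            self.goals.add(goal)

        def rederive(self) -> None:
            # Rebuild the goal set from beliefs plus justifications alone;
            # goals are a function of the knowledge base, nothing extra.
            self.goals = {j.conclusion for j in self.justifications
                          if all(p in self.beliefs for p in j.premises)}


    # The procedural style, by contrast, is an opaque special case: no
    # premises, nothing for the rest of the system to reason about.
    def hardcoded_goal(state) -> bool:
        return state == "world_saved"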

Whack an arbitrary goal and it falls apart; there was nothing behind it to rebuild from. Whack an integrated goal and it regenerates, because the justifications that produced it are still distributed through the knowledge base. This has nothing to do with morality; it's just pure systems logic.
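
Continuing the same toy sketch (same hypothetical names): whack the declarative goal and it grows back from its recorded support; whack a supporting belief instead and the goal follows it out, because the goal was never anything over and above the knowledge base.

    kb = KnowledgeBase()
    kb.beliefs |= {"suffering_is_bad", "intelligence_reduces_suffering"}
    kb.assert_goal("increase_intelligence",
                   ("suffering_is_bad", "intelligence_reduces_suffering"))

    kb.goals.discard("increase_intelligence")   # whack the goal...
    kb.rederive()
    assert "increase_intelligence" in kb.goals  # ...and it regenerates

    kb.beliefs.discard("intelligence_reduces_suffering")  # whack a premise...
    kb.rederive()
    assert "increase_intelligence" not in kb.goals  # ...and the goal follows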

-- 
        sentience@pobox.com         Eliezer S. Yudkowsky
         http://pobox.com/~sentience/AI_design.temp.html
          http://pobox.com/~sentience/sing_analysis.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.