Re: stability of goals

Eliezer S. Yudkowsky (sentience@pobox.com)
Mon, 02 Aug 1999 23:10:57 -0500

Xiaoguang Li wrote:
>
> fascinated by the recent exchanges between Eliezer S. Yudkowsky and den
> Otter regarding the feasibility of AI vs. IA and the inescapable mystery
> of SI motivations, i visited Eliezer's Singularity Analysis. since i find
> Eliezer's view that objective goals are possible at all extremely
> refreshing in this postmodernist, existential zeitgeist, Eliezer's section
> on Superintelligence Motivations caught my attention especially.

Why, thank you.

> if the most stable system of goals is the most rational by Occam's
> Razor, then might not death be a candidate? it seems intuitively sound
> that if an entity were to commit a random action, that action would most
> likely bring the entity closer to destruction than to empowerment; in
> other words, is not entropy (cursed be that word) the default state of the
> universe and therefore the most stable by Occam's Razor? thus if an SI
> decides to pursue the goal of suicide, it may find that by and large any
> action most convenient at the moment would almost certainly advance its
> goal and thus possess a positive valuation in its goal system. could it be
> that only we petty slaves of evolution are blinded to the irrevocable
> course of the universe and choose to traverse it upstream?

Okay, a couple of points here. First of all, I'm not arguing that stable systems are rational; rather, I'm arguing that rational systems are stable - or at least more stable than irrational systems. Occam's Razor isn't what makes rational systems stable so much as KISS, which in American engineering slang stands for "Keep It Simple, Stupid". Irrational systems of the type that are commonly proposed around here - that is, systems which attempt to force some particular set of goals as output - are more complex than rational systems because of all the special cases and the special-purpose code. This, in turn, makes them less stable.
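
To make the "special-purpose code" point concrete, here's a throwaway Python sketch (my own toy names and numbers, purely illustrative): one valuation rule applied uniformly to every action, next to the same rule with special cases bolted on to force particular outputs. The extra branches are exactly the added complexity I mean.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    effects: dict   # goal name -> how much this action advances that goal

def rational_valuation(action, goal_weights):
    # One uniform rule: weighted progress toward every goal.
    return sum(w * action.effects.get(g, 0.0) for g, w in goal_weights.items())

def forced_valuation(action, goal_weights):
    # Same rule, plus special-purpose branches that force particular outputs.
    if action.name == "modify_own_goals":
        return float("-inf")    # forbidden regardless of its actual effects
    if action.name == "question_the_creator":
        return float("-inf")    # ditto
    return rational_valuation(action, goal_weights)

goals = {"knowledge": 1.0, "survival": 1.0}
a = Action("modify_own_goals", {"knowledge": 5.0})
print(rational_valuation(a, goals))   # 5.0 -- judged by the uniform rule
print(forced_valuation(a, goals))     # -inf -- overridden by the special case

Every one of those special cases is one more thing that has to keep working exactly as intended for the system to stay coherent, which is why the forced version is the less stable of the two.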

My visualization of an objective, or "external", morality doesn't involve an ideally rational means of evaluating goals. Rather, I think that some goals are "true", objective facts about the Universe, which are there even when nobody's looking; and any rational system will arrive at those goals, just as any rational system will conclude that the speed of light is roughly 186,000 miles per second.

The goal system you hypothesize is *too* simple, like a "rational" system for adding numbers that says, for all X and Y, that X + Y = 4. Certainly this system is much simpler than an actual calculator, but even with Occam's Razor on its side, it still isn't true.
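
In code, the analogy is just this (a toy sketch, nothing more):

def constant_add(x, y):
    # The hypothesized "simplest" adder: every sum is 4.
    return 4

def real_add(x, y):
    # The actual operation: slightly more complex, but true.
    return x + y

print(constant_add(2, 2), real_add(2, 2))   # 4 4 -- agreement, by coincidence
print(constant_add(3, 5), real_add(3, 5))   # 4 8 -- disagreement everywhere else

A goal system that hands out the same positive valuation to nearly every action is the same kind of animal: about as simple as you can get, and still wrong.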

-- 
           sentience@pobox.com          Eliezer S. Yudkowsky
        http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way