Re: Singularity?

Eliezer S. Yudkowsky (sentience@pobox.com)
Wed, 01 Sep 1999 10:58:17 -0500

Eugene Leitl wrote:
>
> Billy Brown writes:
>
> > Eli isn't the only one. I figure that whether or not this scenario happens
> > will be determined by the laws of physics, so I'm not worried about causing a
> "Will be determined by the laws of physics" is a remarkably
> contentless statement. Everything's (apart from Divine Intervention,
> which most people here believe doesn't exist) determined by the laws of
> physics. So what?

Not "determined by" in the sense of "caused by" - "determined by" in the sense of "having only one outcome". If I say 2 + 2 = 5, undoubtedly my answer was caused by the laws of physics acting on my neurons, but the answer certainly wasn't determined by the laws of physics! The answer determined by physics is 4.

Likewise, _The Adaptive Mind_ (thanks, Paul Hughes!) makes a very careful distinction between saying that biology is *caused* by physics, and saying that biology *reduces* to physics - between *causation* of pattern and *transmission* of pattern.

Our hypothesis is that the motives of a sane Power are copied directly from the logic of the Universe. We don't have any reason to believe that you can't build insane Powers, but we do think that you can't dictate the ultimate goals of a Power by pattern-transmission from the initial conditions. The vast majority of cognitive patterns that don't converge to sanity also won't converge to anything else - and that's a causal statement, not a statistical one. To the degree that you can make your Power's motives converge towards *anything*, we think they'll tend to converge towards a structure which accurately represents our hypothesized "true" or "objectively correct" goals.

You can't draw a sharp line between internal representation-ness and external representation-ness. There's a reason why "Self-awareness and will" is a single chapter in _Coding a Transhuman AI_.

> > disaster that would not otherwise have occurred. I am, however, very
> > concerned about the potential for a future in which AI turns out to be easy,
> > and the first example is built by some misguided band of Asimov-law
> > enthusiasts.

Same here.

> Fortunately, whoever believes in fairy-tales like Asimov's laws is
> (due to obvious extreme incompetence) quite unlikely to bootstrap the
> first AI.

That wasn't the attitude around here before *I* started screaming about the fairy-tale nature of Asimov Laws, as I recall. Though naturally I'm ecstatic that the concept I've been pushing - "Controlling minds with Asimov Laws is like going faster-than-light with Warp Drive" - is now taken for granted even by famed skeptic Eugene Leitl, I wouldn't be at all surprised to find that a majority of AI researchers are still talking about it.

> To make this somewhat less noise: I think rushing AI is at least as
> bad as rushing nano. Relying on the best-case scenario (where the
> first transcendee is deliberately holding back (*all*) the horses to
> allow everybody to go on the bus) is foolish at best. To begin, the
> perpetuator might not be human to start with.

So what *do* you propose? Now I'm curious.

-- 
           sentience@pobox.com          Eliezer S. Yudkowsky
        http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way