Re: Personal goal system was: IA vs. AI

Eliezer S. Yudkowsky
Sun, 08 Aug 1999 22:07:37 -0500

Edwin Evans wrote:
> Eliezer, what do you mean by:
> > Even in a
> > deterministic Universe, there are still causal dependencies between past
> > and present
> That's what I thought a deterministic universe is. One where the present
> instant depends on the instant before.

In a quantum-random Universe too, of course. The point is that the common visualization of "determinism" involves a Universe that converges to a given future regardless of the past - tell anyone that the future is "specified" and that's what they'll hear. In such a case, of course, free will wouldn't exist. The idea that the future can be both predetermined and causally reliant on the present requires some thought, or training in deterministic formalisms like Turing computability.
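The point can be made concrete with a toy sketch (my illustration, not from the original post): a deterministic update rule whose future is fully "specified" given the present, yet still causally depends on the present - change the current state and the specified future changes with it. The particular rule here is arbitrary.

```python
def step(x):
    # A pure function of the previous state: this is all "determinism" requires.
    return (3 * x + 1) % 17

def run(initial, n=5):
    # Iterate the rule n times; the result is fixed once `initial` is fixed.
    x = initial
    for _ in range(n):
        x = step(x)
    return x

# The future is predetermined given the present...
assert run(2) == run(2)
# ...and yet causally depends on the present: a different present
# yields a different (equally predetermined) future.
assert run(2) != run(3)
```

A Universe that "converged to a given future regardless of the past" would correspond to `run` returning the same value for every `initial` - which a rule like this plainly does not.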

> I don't think it's the paradox (between thinking I have control and being
> controlled by low-level processes) that is an illusion. The paradox has an
> answer. The feeling of control is an illusion.

While I'm still trying to find a good mathematical definition, I don't think it's at all inappropriate to speak of high-level processes controlling low-level processes. But roughly, if the higher-level process is simpler - if you can simulate it using a smaller Turing machine - then it may make sense to view that process as being self-directed. All abstractions are unreal in a sense, but the non-abstract level is always far too detailed for a human mind to understand - and if the only way to predict and manipulate reality is to deal with the abstraction, which it is, then we're stuck with the abstraction. Besides, nobody is claiming that the abstractions are real - just that they're useful. Not in the make-a-better-person sense, perhaps, but certainly in the deal-with-reality sense.
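One crude way to cash out "simpler - simulable by a smaller Turing machine" is description length. As a sketch (my illustration; compressed size is only a rough, computable stand-in for the true minimal-program size, which is uncomputable):

```python
import zlib

def description_length(data: bytes) -> int:
    # Compressed size as a rough proxy for the size of the smallest
    # program (Turing machine) that reproduces the data.
    return len(zlib.compress(data))

# A sequence with a simple high-level pattern: "repeat 'ab' 500 times".
patterned = b"ab" * 500

# An irregular sequence of the same length, with no short high-level story.
irregular = bytes((i * 73 + 11) % 251 for i in range(1000))

# The patterned data admits a much shorter description - the sense in
# which a higher-level process can be "simpler" than its substrate.
assert description_length(patterned) < description_length(irregular)
```

The high-level description ("repeat 'ab'") predicts and manipulates the whole sequence without tracking any individual byte - which is the only relationship a human mind can have with the non-abstract level anyway.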

What it comes down to is causal patterns. My image of these words may only be events in the visual cortex, but the pattern of that image mirrors the photons hitting my eyeballs, which mirrors the screen, which mirrors the VRAM, which mirrors the letters I'm typing. I think it's a valid mode of thought to skip the reductionist perspective and the chains of transmission and just say that it's the same pattern, since that pattern-identity is at once perfect, predictive, and manipulable.

In the same way, it makes sense for Paul to speak of choosing actions, or for me to speak of reasoning to the truth. In one sense we're only neurons, but in another sense we're the higher-level patterns of thought, of rationality, or of truth incarnated in the neural substrate.

The issue ultimately boils down to the basic question of semantics: When does A mean B? I would argue that to make any choices at all, you have

> Paul, if you feel you need to believe in free will, then I would ask you if
> it was enough to believe in the possibility of it. I also wonder what
> "assortment of things" I shouldn't do because I don't think I have free
> will. I think I should do what is right. (It seems like it's mostly just a
> theoretical or blaming issue. The only practical thing I can think of right
> now is whether I argue that I have free will or that I don't have free
> will.)

> Also, Eliezer wrote:
> > So
> > purposes, if not energy and short-term motivations, definitely fall into
> > the "software" category.
> Huh? Purposes are definitely software but short-term motivations are not?

They're a lot harder to manipulate. Consider the relative difficulty of deciding to saw off your hand with a rusty butter knife and actually doing it.

           Eliezer S. Yudkowsky
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way