Re: TECH: Fractal Tardis Brains

hal@rain.org
Sun, 13 Jun 1999 23:26:49 -0700

Eliezer S. Yudkowsky <sentience@pobox.com> writes, quoting me:
> > Or do you
> > mean that computational worlds holding intelligent entities may exist,
> > but that our own particular world is not computational, because of certain
> > specific characteristics that may not be shared with other worlds?
>
> Yes. Turing machines can be intelligent, but "intelligence" is an
> observer-relative property; there's no absolute property test for
> "intelligence". I think that having any sort of absolute property test
> obviously requires an absolute test for "instantiation", a concept which
> has no mathematical definition. In fact, my attempts to construct a
> definition led me to think that instantiation is fundamentally
> observer-relative. Since I believe in an absolute test for "reality"
> and "consciousness", ergo reality and consciousness are noncomputable.
> But I don't believe in an absolute test for "intelligence", and so I see
> no reason why I can't construct a transhuman AI.

So your transhuman AI would be intelligent but not conscious?

Do you then believe in zombies, beings that act conscious but actually are not? Or would you say that any intelligent computer program would (in some sense) know that it is not conscious?

Moravec has an interesting thought experiment in which a cellular automaton (CA) like Conway's Game of Life runs long enough to evolve living organisms, which then develop intelligence. Would you say that this is possible?
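
For concreteness, here is a minimal sketch (my own illustration, not Moravec's) of the kind of mechanism involved: one update step of Conway's Game of Life in Python, on a grid with wraparound edges. The point is how simple the primitive rule is relative to what is being claimed could evolve on top of it.

    def life_step(grid):
        """One synchronous update of Conway's Game of Life on a torus."""
        rows, cols = len(grid), len(grid[0])
        new = [[0] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                # Count the eight neighbors, wrapping at the edges.
                n = sum(grid[(r + dr) % rows][(c + dc) % cols]
                        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                        if (dr, dc) != (0, 0))
                # A live cell survives with 2 or 3 neighbors;
                # a dead cell is born with exactly 3.
                new[r][c] = 1 if n == 3 or (n == 2 and grid[r][c]) else 0
        return new

    # A glider, the simplest moving pattern, on a 5x5 grid:
    glider = [[0, 1, 0, 0, 0],
              [0, 0, 1, 0, 0],
              [1, 1, 1, 0, 0],
              [0, 0, 0, 0, 0],
              [0, 0, 0, 0, 0]]
    glider = life_step(glider)

Moravec's scenario amounts to iterating life_step on an enormous grid for an enormous number of steps and asking what can evolve.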

If so, could they then evolve communication, societies, emotions, fiction, philosophical speculations about the meaning of life? Or would some of these avenues be foreclosed to them because they are computationally bound?

I don't see where to draw the line here. I don't see anything stopping these beings from evolving intelligence and emotion and all the other characteristics we would associate with consciousness. What gives us the privileged position to step in and say that although they act like they are conscious, and claim to be conscious, they actually are not?

The one legitimate argument I could see along these lines would be the idea that computability is simply too weak a mechanism to produce minds like ours. Penrose claims that our minds rely on some form of ultra-computability that goes beyond Turing computability. Presumably, in this model, no matter how long you let a computer evolve or how hard you work at programming it, it will never show the full level of functional mental competence that human beings have (including, per Penrose, the ability to do mathematics as well as humans do). A more powerful primitive would be needed to allow that level of functionality.
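
To make "beyond Turing computability" concrete: the textbook example of such a more powerful primitive (this is not Penrose's own argument, which runs through Goedel's theorem) is an oracle for the halting problem. The sketch below gives the classic diagonalization showing that no program can decide halting, so a machine equipped with such an oracle would be strictly stronger than any Turing machine.

    # A hypothetical halting decider. The diagonal() construction below
    # shows that no program can actually implement it.
    def halts(prog, inp):
        raise NotImplementedError("no Turing machine computes this")

    def diagonal(prog):
        # Do the opposite of whatever halts() predicts about prog(prog).
        if halts(prog, prog):
            while True:
                pass        # predicted to halt, so loop forever
        else:
            return          # predicted to loop, so halt

    # diagonal(diagonal) would halt if and only if it does not halt, a
    # contradiction; hence halts() is uncomputable, and a halting oracle
    # is a genuinely stronger primitive. Something of that character is
    # what Penrose supposes physics supplies to the brain.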

Although I don't think there is much empirical evidence for this position, it is more philosophically attractive than accepting the existence of zombies. It seems that rejecting computationalism requires you to accept one of these two possibilities.

> I'm extremely conservative when it comes to reality. I'm willing to
> believe that quarks are objectively real. I'm willing to believe that
> qualia are objectively real. I don't believe in the laws of physics,
> mathematical theorems, apples, or any other abstracted properties.

I am having some trouble following you here. What do you mean, you don't believe in the laws of physics? Didn't you say earlier that you thought they were "real" (and hence malleable)?

Hal