Re: TECH: Fractal Tardis Brains

hal@finney.org
Mon, 14 Jun 1999 22:34:56 -0700

Eliezer S. Yudkowsky, <sentience@pobox.com>, writes, quoting me:
> > Moravec has an interesting thought experiment in which a CA model like
> > Conway's game of Life runs for long enough to evolve living organisms
> > which then develop intelligence. Would you say that this is possible?
>
> Plain intelligence? Sure. Probably the vast majority of races across
> the Reality are non-conscious until they Transcend. We're exceptions,
> but, quite obviously, the only consciously observed exceptions in the
> absence of a Singularity.
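
For concreteness: the sort of CA Moravec means updates every cell in parallel by a fixed local rule. Here is a minimal sketch of one Life generation in Python; the set-of-live-cells representation and the names are just my illustration, nothing from Moravec:

    # One generation of Conway's Game of Life, with the board held as a
    # set of live (x, y) cells. Representation and names are illustrative.
    from collections import Counter

    def life_step(live):
        # Tally live neighbors of every cell adjacent to a live cell.
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1)
                         for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        # Birth on exactly 3 neighbors; survival on 2 or 3.
        return {c for c, n in counts.items()
                if n == 3 or (n == 2 and c in live)}

    # Example: a glider, which crawls diagonally forever.
    cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):          # after 4 steps the glider has moved (1, 1)
        cells = life_step(cells)

The point of the thought experiment is that nothing more than this rule, iterated long enough on a big enough grid, would be doing all the work.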

That's an interesting possibility. So they could evolve intelligence, emotions, and many of the other trappings of life similar to ours, but they would lack qualia. They would not be conscious in the sense that we are. They would process information and have models of the world as we do; it would probably even be meaningful for them to use the words "I" and "me" in conversation. But something would be different.

How would this difference manifest itself? Would it be possible to convince such a being that we humans have some "spark", some kind of primary, irreducible experience of reality, that they lack? Suppose they aren't sure initially that there can be any experience of reality beyond what they have. What empirical test can we offer? What capability do we have that they do not? If they were blind, we could talk about how we can tell what is happening at a distance without having to go and touch it. But what can we say if they are blind to qualia?

Maybe, after all, I don't have qualia in the sense that Eliezer does. Perhaps my basic sense of the universe is fundamentally different from his. We both react to the same universe, and so there is a certain basic commonality of representation and reasoning, but perhaps the raw, nitty-gritty, irreducible nature of reality is totally different for each of us. How could we detect this lack on my part?

> The one thing that got seared into my memory by my attempt to formalize
> instantiation is never, ever, ever believe in epiphenomena. *Anything*
> real has to be experimentally detectable, including the property of
> reality itself. Anything "real" has to exhibit different behavior than
> things that are "not real". Otherwise you've got epiphenomena, a zombie
> theory of reality. A Turing computation, being a Platonic object,
> proceeds just the same whether it's "instantiated" or "not instantiated"
> in our reality. This is probably the fundamental reason why nobody will
> ever define instantiation; zombie theories of reality are as silly as
> zombie theories of consciousness.

Consider a situation like the one in The Matrix, where people have their brains directly interfaced to a computer simulation. Are objects in the simulation "real"? I would assume not. But what is the different behavior that would reveal their unreality? Are you saying that there is some way of distinguishing any simulation from reality? What is the trick?

Hal