Re: Qualia and the Galactic Loony Bin

Eliezer S. Yudkowsky (sentience@pobox.com)
Sat, 19 Jun 1999 21:52:14 -0500

hal@finney.org wrote:
>
> I will take the liberty of replying since Eliezer has not. I think
> this is a very interesting question, and I feel that I do understand
> his reasoning on this point. He is welcome to correct or clarify any
> mistakes I make.

Huh - I thought I had replied to Wei Dai's post, but now that I think about it, I had to shut down and the message got sent to the Drafts folder. Hal Finney has done a better job of explaining it.

> I believe Eliezer's logic is that if it is impossible to define whether
> a computation is instantiated, then there is no "fact of the matter" as
> to whether any given computation is instantiated in any given system.
> But he takes as given, based on his personal experience, that it is
> a definite fact that his qualia exist. It follows that qualia cannot
> result merely from computation.

Exactly. Well, I don't know that I take qualia as a "given", you understand, but I think there's an, oh, at least 80% probability that qualia are a definite fact. When the logic of "I think therefore I am" becomes this complicated, it can no longer be trusted - but the qualia themselves seem impossible to doubt. Even the fact that I might be suffering from racial insanity doesn't let me doubt that I, right now, am experiencing the black-on-white of my computer monitor.

> Now I am speculating a bit, but I believe that Eliezer's position
> is that qualia are not a crucial and necessary aspect of the mental
> structure of a successful organism. It should be entirely possible
> for intelligent beings to evolve and succeed without qualia, because
> all that is really necessary is the ability to model the environment,
> extrapolate possible events, come up with plans, and so on, all of which
> seem to be computational tasks (hence not requiring qualia).

Again, absolutely right.

> If this is the case, there is no reason for qualia to evolve. Given that
> we (or at least Eliezer!) have qualia, this must be what Gould calls a
> "spandrel", a sort of evolutionary by-product which has no functional or
> adaptive significance.

Qualia have evolutionary significance - just not *tremendous* significance, or a significance that couldn't be achieved in any other way. Penrose speculates about quantum computing, for example. Well, that would probably be a minor evolutionary advantage - I don't think our brains exploit it in any huge way, nothing that would take mountain-sized brains to duplicate otherwise - but I can easily see it speeding up a few computations.

> Since qualia are not selected for, they probably
> do not normally appear as the result of the evolutionary process, and
> so most intelligent races would not have them.

They'd be selected for - or rather, the underlying physical shortcuts would be selected for - if they appeared spontaneously in a self-programming brain complex enough to use them. Even qualia themselves might be selected for, if "consciousness" does a better job of serving as the system bus, the binding world-model. I guess qualia are very rare because the original physics hack seems like a very improbable thing. I could easily be wrong, but I'd be really surprised to find more than 1% of races conscious, and not at all surprised to find only one in quadrillions.

> As Eliezer says above,
> the only reason we have qualia is because the universe is so large that
> some few intelligent races have ended up with them by happenstance,
> and then he invokes the anthropic principle to say that we must be one
> of that minority, since we are conscious.

Precisely. This is why I keep talking about ten-to-the-Nth other Universes; the improbability of qualia seems to demand a huge population for the Anthropic Principle to operate on.

> > Also, do you think it is possible to program a Turing-computable being to
> > behave as if it has qualia even though it doesn't really? If so how can we
> > tell we are not such beings? I'm thinking "I have qualia" but that is what
> > such a being would be thinking as well. Conversely, how do we know it's
> > not the case that non-sentient animals also have qualia, but they just
> > can't communicate that fact to us?
>
> Based on Eliezer's model, it would seem impossible to faithfully
> replicate all of the characteristics of qualia in a computational being.
> Although he might be able to approximate or mimic the behavior of a
> being who actually had qualia, there would inevitably be some subtle
> differences which could in principle be detected (although it might take
> a super-Turing being to tell the difference?).

I think it depends on the resolution of the modeling. With a Giant Lookup Table you can obviously model all of our input-output characteristics. Likewise, since humans aren't very eloquent on the subject of consciousness, any AI with access to the 'Net could easily be as eloquent as Penrose or Chalmers or, now that I've posted on Extropians, me. But I do think that the behavior wouldn't arise naturally.
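
To make the Giant Lookup Table point concrete, here's a minimal sketch (Python) of what such a thing would look like - the table entries and the toy replies are invented for illustration, and a real GLUT would of course be astronomically large. The point is just that the mapping from input history to output can be pure retrieval, with nothing resembling cognition happening at runtime.

    # Hypothetical sketch of a "Giant Lookup Table" agent. Every possible
    # input history is a key; the stored value is whatever output the real
    # person would have produced. Nothing is computed at runtime except a
    # dictionary lookup.

    GLUT = {
        (): "Hello.",
        ("Do you have qualia?",):
            "Yes - I'm experiencing the black-on-white of my monitor right now.",
        ("Do you have qualia?", "Prove it."):
            "I can't, and neither can you.",
    }

    def glut_agent(history):
        # 'history' is the full sequence of inputs received so far.
        return GLUT.get(tuple(history),
                        "(no entry - a real table would cover every possible history)")

    print(glut_agent(["Do you have qualia?"]))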

> > When you talk about non-Turing-computable "Weird Physics" are you thinking
> > of physics that would be computable under a more powerful model of
> > computation (for example TM with various oracles), or something entirely
> > different?
>
> I think to answer this you have to go back to his basic argument
> regarding the vagueness of instantiation. If you accept that, then
> augmenting the TM with various oracles would still seem to leave the
> problem of indeterminacy of when such an augmented TM is instantiated.
> Now, if the qualia are somehow in the oracle itself, so that it doesn't
> matter what kind of TM is attached to it, then this would be strong
> enough to allow for qualia. But if the existence of qualia depends on a
> particular program being run in the TM, in addition to any functionality
> in the oracle, then the difficulties with instantiating that TM crop up,
> so this would not be strong enough.

Right. Adding oracle capabilities to a TM doesn't solve the mathematical problem of instantiation. As I once said about Perl and object orientation, teaching a horse to moo doesn't make it a cow. Oracle capabilities are icing on the cake; you still have rigidly separated discrete rules and discrete data.
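
For what it's worth, here's a toy sketch (Python, everything in it invented for illustration) of what "adding an oracle to a TM" amounts to structurally. The oracle below is only a stand-in - a genuine halting oracle isn't computable - but the shape of the thing is the point: even with the oracle bolted on, what gets instantiated is still a finite transition table of discrete rules operating on a tape of discrete data.

    # Hypothetical sketch: a Turing-machine step loop with an oracle hook.
    # The transition table (discrete rules) and the tape (discrete data)
    # stay rigidly separate; the oracle is just one more kind of rule.

    def fake_halting_oracle(text):
        # Placeholder only - a real halting oracle cannot be computed.
        return "while True" not in text

    def run(rules, tape, state="start", pos=0, max_steps=100):
        cells = dict(enumerate(tape))              # discrete data
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = cells.get(pos, "_")
            action = rules[(state, symbol)]        # discrete rules
            if action[0] == "oracle":
                # Ask the oracle about the tape contents, branch on the answer.
                contents = "".join(cells[i] for i in sorted(cells))
                state = action[1] if fake_halting_oracle(contents) else action[2]
            else:
                write, move, state = action
                cells[pos] = write
                pos += move
        return state, cells

    # One rule: on a blank cell in the start state, consult the oracle, then halt.
    rules = {("start", "_"): ("oracle", "halt", "halt")}
    print(run(rules, ""))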

"Something entirely different," absolutely. All the intuitive models of physics were wrong, and I would expect any humanly understandable model of reality to be untrue.

> Personally, I largely agree with Eliezer's reasoning here (at least as I
> have interpreted it) but I question the premise about uncertainty of
> implementation. This is where I look for an answer.

Good luck.

-- 
           sentience@pobox.com          Eliezer S. Yudkowsky
        http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way