Re: Qualia and the Galactic Loony Bin

hal@finney.org
Tue, 22 Jun 1999 22:10:55 -0700

Harvey Newstrom, <newstrom@newstaffinc.com>, writes:
> These are simply absurdium arguments. You lead the participant down the
> slippery-slope to an absurd conclusion. You claim that just because the
> exact point of error cannot be determined, that the end conclusion cannot be
> disproved. This is nonsense, and a common logical fallacy.

It is true that this resembles a sorites-style fallacy. Day grades smoothly into night, and yet day is different from night.

However, in the case of qualia/consciousness, there is an important difference. Unlike the day-night case, these aren't defined with respect to some external observer. They exist in and of themselves. A system is conscious, or not, as an intrinsic property.

With day-into-night, the question is fundamentally one of definitions; we could choose an arbitrary dividing line and lose nothing. Consciousness, by contrast, is intrinsic. If it disappears, it must do so as part of some specific physical process.

> You examples are the same. I know that a living brain is conscious. I know
> that a chopped-up brain is dead. Instead of explaining how the chopped-up
> brain can be defined to be alive or conscious, you instead prefer to give
> the various slippery-slope examples between the live brain and the dead
> brain.

So as you move from the living brain by a series of steps to the chopped-up brain, somewhere along the line the consciousness must have disappeared. But each step was a small one: dividing one piece of the brain in two. How could such a small change make consciousness disappear?

There are two possibilities: either consciousness disappeared gradually, a little at a time with each step; or it disappeared suddenly, fully present at one step and then gone when some crucial piece of the brain was divided.

It is hard to see how the latter could happen. At each division we take a piece of the brain which is being given inputs, and divide it into two pieces, then keep giving each sub-piece the inputs it would have gotten from the other part. If you accept that this is OK to do at the very first step, then how can it be a problem at any other step?
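The division step described here, carving off a piece of the brain and replaying the inputs it would have received from the rest, can be illustrated with a toy deterministic network. This is my own sketch, not anything from the thread; the network, weights, and update rule are all made up for illustration:

```python
# Toy sketch (hypothetical illustration): a tiny deterministic "brain" of
# units, each updated from all units' previous values. We split it in two
# and feed each half a recording of the signals the other half would have
# sent, then check that the half evolves exactly as it did when intact.

def step(state, weights):
    # Each unit's next value is a weighted sum of all current values (mod 97).
    n = len(state)
    return [sum(weights[i][j] * state[j] for j in range(n)) % 97
            for i in range(n)]

n = 4
weights = [[i * n + j + 1 for j in range(n)] for i in range(n)]

# Run the intact brain for a few steps, recording the full trajectory.
whole = [[1, 2, 3, 4]]
for _ in range(5):
    whole.append(step(whole[-1], weights))

# Split: units 0-1 are half A, units 2-3 are half B. Half A computes only
# its own units' updates, substituting the *recorded* values of half B's
# units for the live connections that were severed.
half_a = [whole[0][:2]]
for t in range(5):
    merged = half_a[-1] + whole[t][2:]   # replay B's recorded outputs
    half_a.append(step(merged, weights)[:2])

# Half A's trajectory matches what those same units did in the whole brain.
print(half_a == [s[:2] for s in whole])   # True
```

The point of the sketch is only that, for a deterministic system, each sub-piece with its cross-inputs replayed goes through exactly the states it would have gone through in the intact system; that is the premise the thought experiment exploits at every division.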

As for the possibility that consciousness gradually fades away, this has problems of its own (Chalmers discusses this as the "fading qualia" scenario). If it is literally a matter of fading, where smells get less intense and reds become less red, then the person should notice and comment on it; it is hard to see why he would stay silent as his sensory impressions changed.

Perhaps instead it is a more subtle change, where the person just gradually becomes dead but keeps gesturing and walking around and talking about how intense his qualia are, never noticing that he is fading away. But the intermediate point where he is sort of a semi-zombie is a very bizarre state to imagine someone in. If people can't notice that they are only semi-conscious then maybe we are all in that state already.

I don't think these possibilities are strictly impossible, but it is important to recognize that this causal model requires some bizarre and unintuitive behavior in order to stay consistent. And do see the other messages I mentioned, by Emlyn and Eliezer, which address the causal model more directly.

Hal