Re: Qualia and the Galactic Loony Bin

Darin Sunley (rsunley@escape.ca)
Wed, 23 Jun 1999 14:49:50 -0500

hal@finney.org wrote:

> The point is that we took a situation where the brain was conscious,
> by the premise of causal connectivity based functionalism, and by
> substituting one set of signals for an identical set of signals, which is
> arguably no substitution at all, we produced a brain which is passively
> running a replay. So either this seemingly ineffectual substitution
> has eliminated consciousness, which seems hard to understand, or passive
> replays are as conscious as functional brains, which you deny.
>
> Hal

Is it really meaningful to speak of a brain as "passively running" anything? Ignoring the lovely little oxymoron, our mind is what a running brain does. I'm not sure a brain can be running without a mind being there. (Brain is hereby distinguished from other collections of neurons, like the static neural nets insects use or the adder those scientists in Georgia (?) built from leech neurons).

Further to the question of replays: we don't know if the universe is all predetermined. We could all be living in a replay. But because we all thought the universe was proceeding causally the first time, we all think it is proceeding causally now, since it IS a replay, an exact copy of the original. This returns to one of Eliezer's original objections, which, as I understand it, was that:

A causal model brain is conscious. (Layman's use of "conscious". My personal theory)

A replay is usually a sequential display of static, predetermined data. (Reasonable def'n of replay)

A replay of a causal model is not conscious. (Layman's use of "replay", above)

There is no observer-independent difference between static data, static data being played back, and a causal model. (various threads here)

Therefore: there is no observer-independent difference between consciousness and non-consciousness.

But there does seem to be a difference between consciousness and non-consciousness. My brain certainly seems qualitatively different from a lookup table/tree/matrix/Powers know what else.

This illustrates a fairly serious problem with accepting premise 3, at least in conjunction with the idea that consciousness is causal (a macro-level, as opposed to a micro-level, phenomenon; see my previous post).
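To make that premise concrete, here is a minimal sketch (purely illustrative; the update rule is an arbitrary stand-in, not a model of anything neural) of a "causal model" that computes each state from the previous one, next to a "replay" that merely plays the recorded states back as static data. An outside observer sees identical traces either way:

    # Illustrative sketch only: a toy causal model vs. a replay of its trace.

    def causal_model(initial_state, steps):
        """Generate states by actually applying the update rule each step."""
        state = initial_state
        trace = [state]
        for _ in range(steps):
            state = (state * 1103515245 + 12345) % 2**31  # arbitrary stand-in rule
            trace.append(state)
        return trace

    def replay(recorded_trace):
        """Emit the same states, but as static, predetermined data."""
        return list(recorded_trace)

    original = causal_model(initial_state=42, steps=10)
    replayed = replay(original)

    # Identical observable behavior; the causation happened only the first time.
    assert original == replayed
    print(original == replayed)  # True

Nothing in the observable output distinguishes the two runs, which is exactly why premise 3 is so hard to dislodge from the outside.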

We can eliminate the dilemma by any combination of the following.

  1. A causal model brain is not conscious. (Consciousness is not computable).
  2. A replay is conscious. (Well, IT certainly thinks it is, but most people think it is wrong.)
  3. Changing the definition of replay.
  4. Coming up with a good model of instantiation.

Note that we only need one of these to negate the conclusion and thus come up with a good definition of consciousness.

Further to option 2: replays can pass Turing tests administered by other minds within the same replay. This meshes with my post of several weeks back about ontological layers.

Darin Sunley
rsunley@escape.ca