John K Clark
Sat, 7 Feb 1998 23:09:02 -0800 (PST)


On Fri, 6 Feb 1998 Brent Allsop <> wrote:

>In order to know that we use the same red, you must objectively know
>what and why red is or what objective physical phenomenon produces a
>red sensation in our consciousness and be able to objectively
>observe that particular phenomenon going on in your brain and my

And objectively observe that observation of that particular phenomenon going
on in your brain really is objective, and objectively observe that the
observation of the observation of that particular phenomenon going on in your
brain really is objective, and objectively observe that ...

We'll never get anywhere with that approach; we need a different tactic, and
we have one: behavior.

>What the fundamental physical representation is like is very
>important to consciousness and our ability to process information
>and consciously "know" about things.

As you say, this can have practical consequences, but I can find absolutely
no evidence that the way information is represented has any philosophical
ramifications at all.

>When there is a dot of red qualia in a field of green in our
>conscious representation of a strawberry patch, the fundamental
>nature of this red qualia, what it is like, how it is fundamentally
>different than a green qualia, how it stands out, and how it is
>represented in an isomorphically similar 3D space in our mind is a
>key part of the computation process that allows us to be aware of
>and select the ripe strawberry that is ready to be picked.

Exactly, good qualia make our behavior smarter, and unlike consciousness,
intelligence can be tested for. Machines have improved a lot in the last few
years, but even now no machine can do a job like picking ripe strawberries as
well as a human, so I think it's safe to say that a machine's qualia of red is
not as rich as a human's. Yet.

>>To exist information must be in a context, otherwise a hole in a
>>paper tape is not data it's just a hole, but there is nothing unique
>>about the human mind in that regard.

>No. This is only when talking about abstract information.

I don't understand your objection, all information is abstract.

>It is when you try to represent that oxygen atom with holes in
>paper tape that requires the mapping, context, and interpretation.

And a human brain can put a hole in a tape in the context of other holes and
so obtain information from that tape, but obviously machines can do this too;
that's why punched tape was invented, and that's why feeding such a tape into
a machine does not produce random output.
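The point about context can be sketched in a few lines of code (a toy
illustration, not any historical tape format): the same pattern of holes is
meaningless on its own, but yields data once a machine applies an agreed-upon
mapping, here rows of eight holes read as ASCII character codes.

```python
def decode_tape(rows):
    """Interpret each row of holes (1) and blanks (0) as one ASCII byte."""
    chars = []
    for row in rows:
        code = 0
        for bit in row:          # most significant hole first
            code = (code << 1) | bit
        chars.append(chr(code))
    return "".join(chars)

# These holes spell "RED" only under the ASCII mapping; under a different
# mapping the identical tape would mean something else entirely.
tape = [
    [0, 1, 0, 1, 0, 0, 1, 0],   # 0b01010010 = 82 = 'R'
    [0, 1, 0, 0, 0, 1, 0, 1],   # 0b01000101 = 69 = 'E'
    [0, 1, 0, 0, 0, 1, 0, 0],   # 0b01000100 = 68 = 'D'
]
print(decode_tape(tape))        # prints RED
```

The holes carry no information by themselves; the decoding convention is what
turns them into data, and a machine can hold that convention just as well as
a brain can.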

>Red is like red, there is no other meaning required.

I agree, some things cannot be reduced further, and I can give another
example: intelligence produces consciousness, and no explanation of why is
required or possible.

>Searle's Chinese Room

Searle's Chinese Room is a foolish thought experiment that teaches us nothing.

>proves is that any physical phenomenon, no matter how absurdly
>inefficient, can abstractly represent or model any other physical
>phenomenon, including conscious qualia, and can eventually duplicate
>the output. This has nothing to do with the fundamental nature of

Nothing to do with it?! I think it has everything to do with it, and a good
thing too, because that's all you'll ever know for sure about the subjective
experience of another person or machine. If I'm wrong about consciousness,
then random mutation and natural selection would never have bothered to come
up with it, but I know for sure that it managed to produce it at least one
time, because I'm conscious. I have a strong hunch you are too.

I'll never be able to prove it, but I think the best way to make progress in
this area is to just forget about consciousness and concentrate on
intelligence, because you'll get consciousness anyway, free of charge.
That's the tactic evolution took; in fact, nature found that intelligence was
a much harder nut to crack than consciousness. The emotional parts of our
brains are very old, little different from those found in many other animals;
the higher brain functions, like logic and simulating the future so we can
make plans, operate in much more recently evolved parts of the brain.
Consciousness is a dime a dozen, intelligence is not. If nature found one
to be much harder to produce than the other, why would the opposite be
true when we design our machines?

John K Clark
