RE: The copy paradox

Brent Allsop (
Thu, 13 Nov 1997 13:18:26 -0700

Hal Finney <> asked:

> The question is, what constitutes an "abstract" model versus a real
> one? Is it a matter of whether the underlying substrate is silicon
> versus protein? Or is it a matter of the internal form and
> structure of the model?

Silicon is fundamentally real silicon and protein is
fundamentally real protein. The two can computationally or abstractly
model each other, producing identical computational output or
behavior, but at the fundamental layer they are not anything like each
other. When one is formed in a way to model the behavior of the other
it is a mere "abstract" behavior model and not fundamentally like the
real thing.

> I believe Leevi was taking the latter position, and arguing that a
> sufficiently complicated model, even if running on silicon, would no
> longer be abstract.

	Sure, you can model any behavior given sufficient computational
complexity, but at what point would the fundamental nature suddenly
become the same as that of the thing modeled?  Fundamentally, silicon
is still silicon.  It can never magically cross some sufficiently
complex boundary and become fundamentally like protein, even though
the model can precisely model that quality abstractly.  At the
fundamental level they are different.

> Things would acquire meaning not because of arbitrary assignments
> (this register holds saltiness, that register holds blueness) but
> due to the immense complexity of the interactions among the various
> representations.

	You can't chase the fundamental problem away just by adding
enough complexity.  Sure, the "meaning" or abstract behavior can be
anything you want.  A very simple and trivial silicon color-detecting
machine has plenty of complexity to tell me what color something is,
even more reliably than I can.  But the representations of color in
this simple machine, though far more complex, are fundamentally very
different from the phenomenal representations I use to represent
color.  The particular fundamental nature of the color-detecting
machine's representations isn't relevant: they can be states of a
transistor, voltages on a wire, or flux orientations on magnetic
media...  The fundamental nature is not relevant to the computation in
this simple yet more accurate machine.  But for me, the fundamental
nature of green, and how it is phenomenally different from red, is
what enables me to produce the different words "green" and "red".  To
me, the fundamental nature of the representation, and what it is like,
is all-important.
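	The point about the machine's representations being arbitrary can be
sketched in code (a hypothetical toy detector; the wavelength bands and
both internal encodings below are invented purely for illustration):

```python
def classify_wavelength(nm):
    """Map a wavelength in nanometres to a color word (rough bands)."""
    if 620 <= nm <= 750:
        return "red"
    if 495 <= nm < 620:
        return "green"
    if 450 <= nm < 495:
        return "blue"
    return "other"

# Two machines with completely different internal representations of
# the same detection -- small integer codes vs. RGB-style bit patterns.
ENCODING_A = {"red": 1, "green": 2, "blue": 3, "other": 0}
ENCODING_B = {"red": 0xFF0000, "green": 0x00FF00, "blue": 0x0000FF, "other": 0x000000}

def report(nm, encoding):
    # Store the detection in the machine's internal representation...
    internal_state = encoding[classify_wavelength(nm)]
    # ...then decode it back to a word.  Which representation was used
    # makes no difference whatsoever to the behavioral output.
    return {code: word for word, code in encoding.items()}[internal_state]

assert report(700, ENCODING_A) == report(700, ENCODING_B) == "red"
```

Both machines emit identical behavior for every input, which is exactly
why the behavior alone tells you nothing about what the representation
is fundamentally like.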

> Redness and blueness are so complex, so interrelated with other
> concepts, that it would be impossible to disentangle them from each
> other.

	I disagree.  Red is fundamentally and simply red, and blue is
fundamentally and simply blue.  The two are not like each other, and
nothing else is subjectively the same as them.  There is no tangling
at all.  Just because you can have complex relations between them and
many other things doesn't change their unvarying, simple, and
fundamentally different natures.

	You described the relevance of neural processing in the retina
and argued that it all might gradually become conscious.  Again, I
disagree.

	True, all the subconscious preprocessing is essential: it
extracts the required 3D information from the stereo 2D images, which
enables our 3D awareness to be built out of qualia in the conscious
space of our visual cortex.
The green tree you are aware of and think is beyond your eyes is
really in your visual cortex, and only beyond your phenomenal
conscious model of your eyes, which is itself merely a model of your
eyes in your brain.  Everything you are aware of is simply a conscious
model of the reality beyond your senses.  There is nothing gradual
about it.  One is the source of the stimuli and the other is the final
result of much complex sensing and processing.  They are at opposite
ends of a very complex cause-and-effect process.  One is in your
brain and the other is beyond your senses.  One is conscious, the
other is not.

	There is no color, smell, sound, warmth... or pain beyond our
senses; only the electromagnetic radiation, chemical content,
acoustic vibrations, kinetic energy of molecules... and bodily damage
our brains merely arbitrarily represent with such phenomena.  Each of
these things can very successfully model the other, but they are not
really the other and are only abstract models.  Red really is nothing
like 700nm electromagnetic radiation, but it abstractly models it well
in the conscious world of our awareness inside our brain.

	When we peel away the layers or separate the parts, we see a
complex machine composed of very real and distinct parts at the
fundamental layer.  Some of these parts work abstractly at extracting
the 3D info contained in the abstract subconscious 2D stereo images on
the retina.  This causal process eventually constructs a glorious 3D
model of reality out of phenomenal qualia.  This model built out of
qualia is our conscious visual awareness, and it is in our brain, not
beyond our eyes.  The two 2D images on the retina are clearly not
conscious at all.  But the resulting 3D awareness clearly is, and it
is obviously built out of something.  What this something is like is
fundamentally the quality of consciousness.  Nothing abstract can be
quite fundamentally like it.

Brent Allsop