RE: The copy paradox

Hal Finney (hal@rain.org)
Thu, 13 Nov 1997 09:45:29 -0800


Brent Allsop, <allsop@swttools.fc.hp.com>, writes:
> Leevi Marttila <lm+extropians@sip.fi> responded:
> > Actually, "look-up table" is too simplistic an expression. How would
> > you describe a trained artificial neural net semantically?
>
> The same way abstract computer simulations describe or model
> them. There is no more phenomenal information in any abstract
> computer model than there is in what we can abstractly say to each
> other. As I said, you can abstractly model everything with almost
> anything, including speech and internal computer models, but any
> abstract model, like speech, is still only abstractly like it. It
> isn't fundamentally like a real feeling or phenomenal sensation.

The question is, what constitutes an "abstract" model versus a real one?
Is it a matter of whether the underlying substrate is silicon versus
protein? Or is it a matter of the internal form and structure of the
model?

I believe Leevi was taking the latter position, and arguing that a
sufficiently complicated model, even if running on silicon, would no
longer be abstract. Things would acquire meaning not because of arbitrary
assignments (this register holds saltiness, that register holds blueness)
but because of the immense complexity of the interactions among the various
representations.

Saltiness could only be described by a complex relationship among
millions or billions of simulated neurons with various activation
levels, and likewise for other qualia.

In such a system there is no way to change redness into blueness
without affecting anything else. Redness and blueness are so complex,
and so interrelated with other concepts, that it would be impossible to
disentangle them from each other.
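
To make the contrast concrete, here is a toy sketch in Python (entirely
my own illustration: the "concepts" and the random patterns are invented
for the example, with random vectors standing in for a trained net). In
the "register" picture a quale is an isolated slot that can be swapped
without anything else noticing; in the distributed picture a quale is a
pattern spread over many shared units, so overwriting "red" disturbs
every relation it took part in:

import random

random.seed(0)
N = 10_000   # stand-in for the millions of neurons in a real net

def noisy_copy(pattern, noise=0.5):
    """A pattern that shares most of its structure with another one."""
    return [x + random.gauss(0, noise) for x in pattern]

# "Register" style: each quale is an arbitrary, isolated slot.
registers = {"red": 1, "blue": 2}
registers["red"], registers["blue"] = registers["blue"], registers["red"]
# Nothing else in the system is touched by the swap.

# "Distributed" style: each quale is an activation pattern over many
# shared units, and related concepts are built partly out of it.
red   = [random.gauss(0, 1) for _ in range(N)]
blue  = [random.gauss(0, 1) for _ in range(N)]
blood = noisy_copy(red)    # "blood" shares structure with "red"
sky   = noisy_copy(blue)   # "sky" shares structure with "blue"

def overlap(a, b):
    """Average product of two patterns: how strongly they agree."""
    return sum(x * y for x, y in zip(a, b)) / len(a)

print(overlap(red, blood), overlap(red, sky))   # high, near zero
red = blue   # try to turn redness into blueness...
print(overlap(red, blood), overlap(red, sky))   # ...and every relation moves

The numbers themselves are meaningless; the point is only that in the
second kind of representation there is nothing local to swap.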

> Also, neurons in our eyes and optic nerve learn how to
> abstractly represent and forward modeled information to the visual
> cortex. But we are not phenomenally conscious of this information
> until it arrives at the visual cortex, where the phenomenal conscious
> representations are produced. There is something more going on in the
> neurons of the primary visual cortex than the abstract stuff that is
> going on subconsciously, as in the retinal neurons. I would bet
> that any neural net of today is still, like the neurons of the retina,
> not yet producing phenomenal sensations. We haven't yet discovered
> precisely what this phenomenal process is and how it is formed into a
> unified and emotional awareness and why it is like what it is like.

It is indeed an interesting question at what point in the processing
chain our neurons begin to affect consciousness. It's conceivable
that there is no sharp boundary, that it is a gradual transition.

You think the retina is purely abstract, just pre-processing and
conditioning the data before it is presented to the _real_ conscious
neural network somewhere in the cortex. But even there, the cortex just
does more processing. The initial layers are known to look for certain
features in the input, like lines and edges. Certain neurons will fire
when they "see" lines with specific orientations. Other neurons look
for simple patterns of movement, like vertically moving edges.
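
As a rough illustration of that kind of orientation selectivity, here is
a small Python sketch (my own, using the textbook Sobel kernels purely
for illustration; no claim that cortical neurons literally compute
this). Each "unit" takes a weighted sum of a 3x3 patch of the image and
fires in proportion to how well its preferred edge is present:

# Two "orientation-selective" units, one tuned to vertical edges,
# one to horizontal edges.
SOBEL_VERTICAL   = [[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]]
SOBEL_HORIZONTAL = [[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]]

def response(image, kernel, row, col):
    """Weighted sum of the 3x3 patch centered at (row, col)."""
    return sum(kernel[i][j] * image[row - 1 + i][col - 1 + j]
               for i in range(3) for j in range(3))

# A tiny image: dark on the left, bright on the right (a vertical edge).
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]

print(response(image, SOBEL_VERTICAL, 1, 1))    # fires strongly (4)
print(response(image, SOBEL_HORIZONTAL, 1, 1))  # stays silent (0)

Rotate the edge ninety degrees and the two units trade places; stacking
further units on top of their outputs is the sort of deeper processing
described below.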

We can easily imagine that this kind of feature analysis would be useful
in beginning to understand the structure of an image. But this is still
very mechanical and, as you would say, abstract. Presumably it, too, is
just a preliminary stage, its output passed on to a still deeper level
of processing, and perhaps that is where consciousness will begin.

But the alternative is that we're peeling an onion, and when we get down
to the center, there's nothing there. All those layers we discarded as
doing purely abstract calculations were collectively creating the very
consciousness that we were looking for. On this view, even the
mechanical calculations of the retina can be said to be part of our
conscious experience of vision. They play a structural and causal role in
the image processing that we call the sense of sight.

Hal