Re: Can Pure Lookup Tables Be Conscious?

From: hal@finney.org
Date: Wed Apr 25 2001 - 16:59:13 MDT


I don't have time to respond in detail right now to Lee's message
on lookup tables and consciousness. I want to make two quick points
though:

> It is now easy and instructive to visualize such "playback" on
> a Life Board. Rather than the next generation being actively
> computed, we may simply have an automatic arm reach into a
> sequenced table of gels, each with bright spots in fluorescent
> paint, and place the next gel in sequence on the Life Board.
> This gives every bit the appearance of a civilization developing,
> or of an entity having an experience, but in reality has no
> content. Indeed, the lights are on, but no one is home.

I think this is a good description of the "playback problem" in
consciousness. As I wrote earlier, I view it as a harder problem than
the more basic question of whether a computer can be conscious at all.
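To make the contrast concrete, here is a minimal sketch (mine, in Python,
not anything from Lee's post). The first function actively applies the
Life rule to whatever board it is handed; the second just places the next
precomputed "gel" from a stored sequence, without ever consulting the rule:

    from collections import Counter

    def next_generation(live_cells):
        # Actively compute the next Life generation from ANY board state.
        # live_cells is a set of (x, y) coordinates of live cells.
        neighbor_counts = Counter(
            (x + dx, y + dy)
            for (x, y) in live_cells
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0))
        return {cell for cell, n in neighbor_counts.items()
                if n == 3 or (n == 2 and cell in live_cells)}

    def playback(gels, t):
        # Merely place the t-th precomputed gel on the board; no rule applied.
        return gels[t]

The playback version produces exactly the same sequence of light patterns,
but only for the one history that happens to be stored on the gels.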

My first point is that among mainstream philosophers of consciousness,
the general view is that such a machine would not be conscious.
David Chalmers, Hilary Putnam, and others have characterized
the materialist or functionalist claim as being that a machine
which properly implements a particular program would be conscious.
They then focus their attention on what "implementation" means.
It turns out that this is a much slipperier concept than one might at
first suppose. Here is a quote from one of Chalmers' papers on the topic
at: http://www.u.arizona.edu/~chalmers/papers/computation.html:

"Justifying the role of computation requires analysis of implementation,
the nexus between abstract computations and concrete physical systems. I
give such an analysis, based on the idea that a system implements a
computation if the causal structure of the system mirrors the formal
structure of the computation."

The point, then, is that even if we agree that a particular abstract
program would be conscious if properly implemented, we are still faced
with the question of whether a particular system implements it. And the
criterion which Chalmers lays out is that the implementation must mirror
the formal structure of the computation. Generally this means that it
must deal not only with a particular set of data, but with any possible
set of data, in the same way that the abstract program would.

The playback machine does not reach this level of fidelity. It only
plays back a single record. It cannot deal with variations in the
input, and it does not explore the many possible branches that a real
implementation would. So by the analysis which is generally followed
by philosophers of consciousness, playback machines are not conscious.

(Note, though, that this objection does not apply to the huge lookup
table, or HLT, since it does represent the full complexity of the abstract
program. There may be other reasons to doubt the consciousness of the HLT,
but this is not one of them.)
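Again, only a toy sketch of my own (a 3x3 Life board, so the whole state
space is just 2^9 = 512 boards): the playback tape is indexed only by
time and contains one particular history, while the huge lookup table has
an entry for every possible board state, and so mirrors the full
input-output structure of the Life rule:

    from itertools import product

    CELLS = [(x, y) for x in range(3) for y in range(3)]

    def life_step(live_cells):
        # The same Life rule as in the earlier sketch, confined to the
        # 3x3 toy board.
        nxt = set()
        for (x, y) in CELLS:
            n = sum((x + dx, y + dy) in live_cells
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    if (dx, dy) != (0, 0))
            if n == 3 or (n == 2 and (x, y) in live_cells):
                nxt.add((x, y))
        return frozenset(nxt)

    # Playback: one recorded run; inputs outside this history simply
    # don't exist for it.
    start = frozenset({(0, 1), (1, 1), (2, 1)})   # a "blinker"
    tape = [start]
    for _ in range(4):
        tape.append(life_step(tape[-1]))

    # Huge lookup table: one entry per possible board state, so it answers
    # correctly even for inputs that never occur in any single playback.
    hlt = {}
    for bits in product((0, 1), repeat=len(CELLS)):
        state = frozenset(c for c, b in zip(CELLS, bits) if b)
        hlt[state] = life_step(state)

The table is built by exhaustively running the rule over every input, which
is why it preserves the counterfactual structure that a single playback
throws away.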

My second point is a general skepticism about the notion that consciousness can
be localized. I will speak mostly in terms of spatial localization here,
but if we come to doubt the meaningfulness of localizing consciousness
spatially, then perhaps temporal localization will be seen as doubtful
as well. In that case the whole question of "is X conscious" can be
seen as meaningless.

At first it may seem obvious that consciousness is localized. Our
consciousness is clearly a property of our brains (or at least our nervous
systems) and is therefore confined to our heads (or at least our bodies).

Okay, but now imagine yourself as a microscopic observer, smaller than a
neural cell, moving about in someone else's brain. Where exactly is the
consciousness? Is it present within the neurons but absent outside them?
Is it a field which suffuses the entire brain? What about the edge
of the brain, does consciousness end suddenly or does it decay over
some distance? Similarly, if consciousness is in a computer, what part
of the computer? The CPU? The power supply? The memory?

If consciousness truly has a location, we ought to be able to answer
these questions. Yet to me it seems doubtful that they are meaningful.
We can't see consciousness or measure it in any way. It's not like
an electric field or a fluid density. There is no way to create a
consciousness meter which we can move around and measure the strength
of the consciousness field.

I think much of our perception about the locality of consciousness is an
illusion caused by the cluster of sense organs in the head. It is clear
that if our brains were removed to some distance, but somehow remained
attached to our bodies via radio transducers on the nerves, we would
feel no different. We would still "feel" like our consciousness was in
our head, even though our brains were in a vat somewhere.

So I would distrust instinct on this question. We have to work purely
on the basis of evidence. And since there is no way of measuring the
location of consciousness, I suggest that it is not a meaningful concept.
Putting x,y,z,t coordinates on consciousness doesn't make sense.

Hal


