Re: Qualia and the Galactic Loony Bin

hal@finney.org
Thu, 24 Jun 1999 18:40:14 -0700

Harvey Newstrom, <newstrom@newstaffinc.com>, writes:
> Hal <hal@finney.org> wrote:
> > The premise is that if you run a brain (or perhaps a brain simulation)
> > through an experience where you supply the inputs and it processes
> > the data normally, it will be conscious. I think you will agree with
> > that. We also have to assume that the brain can be made to run in
> > a deterministic way, without any physical randomness. This would be
> > more plausible in the case of a brain simulation but perhaps it could
> > be arranged with a real brain.
>
> I agree that a brain that processes inputs normally should be defined as
> conscious. I disagree that brains are deterministic and will respond to
> identical input with identical output every time. The internal state of the
> brain is not the same. If you override a brain's internal states to the
> point that they are controlled by external minds and not by the brain
> itself, I begin to doubt that the brain is conscious.

This is a good point, which Billy also brought up. We are asking for an exact replay here, but real brains probably can't be replayed exactly. I will say more about this below.

> I am not sure I agree with the assumption that the same input into the same
> brain will always produce the same results. Most theories of creativity
> involve randomizing factors in the brain. Maybe the left brain will always
> come up with the same response, but I believe that the right brain uses
> randomization to enhance creativity. You can force its random factors to
> replay as well, but you would have to do this at each step of the thought
> process. Suppose your input was, "name something Clinton hasn't screwed
> up"? Would the brain produce the same output every time? This question has
> no determinate answer. I think the creative randomizing brain would think
> up something different every time.

If we were in a deterministic universe, then in principle you could set things up so that the brain would give the same answer each time. In practice, though, you are undoubtedly right that it would not be possible.

> > We now introduce the notion of making a recording of all the internal
> > activity of the brain during the first run, each neural firing or whatever
> > other information was appropriate depending on how the brain works.
> >
> > We then add a comparison during the second run of the activity generated
> > during that run with the recorded activity from the first run. Imagine
> > comparing this neural activity at each neuron. The two sets of data
> > will match, by the premises of the experiment, because we are exactly
> > repeating the first run.
>
> I don't think this is possible. Neurons trigger each other to fire by
> releasing neurotransmitters into a liquid solution that exists between them.
> When the concentration of chemical gets high enough, those neurons that
> detect the concentration will fire. It would be impossible to make this
> chemical diffuse across a liquid medium in exactly the same way every time.
> Each molecule diffuses randomly. It will work roughly the same way, but its
> exact precision is indeterminate. The restoration of these chemicals to the
> neural stores is also random. The components to make the chemicals float
> around randomly in the blood. They are picked up randomly as needed by
> individual cells. There is no way that this supply and demand will always
> work out the same. Different neurons will be supplied slightly differently
> with each run. The only way to totally control the brain, which is 90%
> liquid, is to control every molecule as it bounces around in the liquid.

The problem I have with this line of objection is that it seems to depend very much on inessential properties of brains. If we base the entire objection to the experiment on the fact that brains have random elements and are made of liquid and so can't ever repeat the same calculation, then we must be assuming that these elements are *essential* for consciousness.

If it were possible to create a brain out of hardware which was not liquid and which was able to exactly repeat its calculations, then this objection would be irrelevant. Of course we don't know if such a brain could be created, but unless we are prepared to rule it out, it is not a secure foundation for rejecting the conclusions of the thought experiment.

The computationalist theories which I am trying to challenge here do in fact usually accept the notion that systems other than brains can be conscious. They do not usually say that there must be liquids involved. For example, there is a general belief that a computer could simulate a brain precisely enough that there would be a consciousness associated with the computer's activity.

I don't know if you, Harvey, accept this notion, but if you do, wouldn't you agree that it is much easier to imagine a computer program which runs in exactly the same manner each time? Reset it to the same initial state (including any seed for a pseudo-random number generator), run it with the same inputs, and you will get the same output. If you accept that computers can be conscious, I don't think your objections which depend on the liquid nature of the brain fully deal with the issues posed by this thought experiment.
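
Here is a minimal sketch of that repeatability in C (my own toy example, not part of the original argument; think() is just a hypothetical stand-in for one "creative" step of the simulation):

	#include <stdio.h>
	#include <stdlib.h>

	/* A stand-in for one step of the simulation: its output
	   depends on both the input and the pseudo-random state. */
	int think(int input) {
		return input * 2 + rand() % 10;
	}

	int main(void) {
		srand(12345);               /* seed the generator */
		printf("%d\n", think(7));   /* first run */

		srand(12345);               /* reset to the same initial state */
		printf("%d\n", think(7));   /* same seed, same input: same output */
		return 0;
	}

Both calls print the same number: the "randomness" replays exactly because the entire state, seed included, was reset.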

(I hope you will not think that in focusing now on a brain simulation running on a computer rather than an actual brain, I am changing the rules on you again. I am trying to zero in on the essential issues relating to consciousness, and I hope we will agree that the liquid nature of the brain is not one of them. Since your reply relied on that aspect, I think it is best if we strip it away to get to the fundamental issues.)

> > Finally, at each neural element we substitute the recorded data for the
> > actively generated data. We block the actively generated signals
> > somehow, and replace them with the signals from the recorded first run.
> > Since these two signals are completely identical, it follows that there
> > will be no difference in the behavior of the brain. Substituting one
> > data value for another identical one is of no functional significance.
>
> This is the point where I am positive that the brain is not conscious. To
> do this, you must suppress all of the brain's own neuron firing, and control
> every neuron externally. True, you can make the brain act like it would
> have anyway, but you can also make it act unnaturally. The
> control/consciousness of this brain is now in the hands of the programmer
> controlling each neuron, and not with the brain. The brain has become a
> meat puppet.

I did not mean to imply that we would suppress the brain's neural firing. The neurons still fire as they did before. What you do is to interrupt the information flow from one neuron to the next, substituting the (identical) data from the pre-recorded run.

Let us consider this in the context of a brain simulation, along the lines discussed above. When we have software running in a computer, we might have a statement like:

	x = u*v + y*z;

where output variable x's value depends on the values of u, v, y and z; in the sense of information flow, x's new value is *caused* by the values of u, v, y and z. In our brain simulation we will have many instances of causation and information processing like this.

We can go through the same two steps I described earlier in transforming this conscious program into an unconscious one, by your criteria. First we run the program and record all the intermediate results:

	x = u*v + y*z;
	x_recorded = x;

Here we have a new variable x_recorded which will save the value of x at this point in the program. We would of course need a huge number of variables to do this everywhere in the program.
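
In practice, rather than inventing a named variable for every intermediate result, one would more plausibly record the values onto a sequential tape. Here is a sketch along those lines; record() and tape are hypothetical names of my own, not part of the thought experiment as stated:

	#include <assert.h>
	#include <stdio.h>

	#define TAPE_SIZE 1000000

	int tape[TAPE_SIZE];    /* every intermediate value, in order */
	int pos = 0;            /* current position on the tape */

	/* Record a value as it is computed, then pass it through unchanged. */
	int record(int value) {
		assert(pos < TAPE_SIZE);
		tape[pos++] = value;
		return value;
	}

	int main(void) {
		int u = 2, v = 3, y = 4, z = 5;
		int x = record(u*v + y*z);  /* the same calculation, now taped */
		printf("x = %d, taped = %d\n", x, tape[0]);
		return 0;
	}

The pattern of information flow is untouched; the tape is a purely passive observer of the calculation.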

Then we run the program again, testing to make sure that the calculated values match the recorded ones:

	x_temporary = u*v + y*z;
	assert (x_temporary == x_recorded);
	x = x_temporary;

Here we perform the calculation u*v + y*z and store it in a temporary variable called x_temporary. We compare that against x_recorded and produce an error if they don't match. Assuming they do match (which they will, if the computer is working properly), we then store the temporary variable into x and proceed with the calculation.

We have not made any changes to the pattern of information flow here; we have merely introduced the step of comparing the recorded data with the data that is dynamically generated.

We then perform the substitution I described above, where we will use the recorded data rather than the calculated data to proceed with the program. We change the above code to:

	x_temporary = u*v + y*z;
	assert (x_temporary == x_recorded);
	x = x_recorded;

which differs only in the last line. We have two variables, x_temporary and x_recorded, which are equal. In the previous case we assigned x's value from x_temporary, but now we will assign it from x_recorded, which is the same value. The result is that we have substituted one value for an identical one, which as I suggested earlier is arguably no substitution at all.

The net result is that each individual step of the brain simulation works as before, except that it takes its inputs from the recorded data (which of course matches the data actually generated in this run). So we have something which is essentially a passive replay. This means, in your model, that the brain is "dead" and not conscious. Yet all we did was to substitute, throughout the program, values for identical values. A 3 got turned to a 3, a 12000 got turned to a 12000, and so on. How can changing values into identical values make the difference between life and death?
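
To gather the three stages into one concrete place, here is a self-contained sketch of the whole progression; simulate() is a toy stand-in for the brain simulation, of my own construction, and no more than that:

	#include <assert.h>
	#include <stdio.h>

	enum mode { NORMAL, VERIFY, REPLAY };

	#define TAPE_SIZE 1024
	int tape[TAPE_SIZE];
	int pos;

	/* Every intermediate value passes through this function.
	   NORMAL: record the freshly computed value and use it.
	   VERIFY: check it against the recording, then use the fresh value.
	   REPLAY: check it, then use the *recorded* value instead. */
	int step(enum mode m, int computed) {
		switch (m) {
		case NORMAL:			/* first run: record */
			tape[pos++] = computed;
			return computed;
		case VERIFY:			/* second run: compare, use fresh value */
			assert(computed == tape[pos]);
			pos++;
			return computed;
		default:			/* REPLAY: compare, use recorded value */
			assert(computed == tape[pos]);
			return tape[pos++];
		}
	}

	/* A toy stand-in for the brain simulation: a short chain of
	   calculations, each one routed through step(). */
	int simulate(enum mode m, int input) {
		pos = 0;			/* rewind the tape */
		int a = step(m, input * 2 + 1);
		int b = step(m, a * a - input);
		return step(m, a + b);
	}

	int main(void) {
		printf("%d\n", simulate(NORMAL, 7));	/* records the tape */
		printf("%d\n", simulate(VERIFY, 7));	/* compares; always matches */
		printf("%d\n", simulate(REPLAY, 7));	/* substitutes identical values */
		return 0;
	}

All three runs print the same result, and the REPLAY run differs from the NORMAL run in no value anywhere, which is exactly the substitution of identical values that the argument turns on.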

Hal