Conscious of the hard problem

O'Regan, Emlyn (Emlyn.ORegan@actew.com.au)
Tue, 22 Jun 1999 15:20:41 +1000

An initial thought about Hal's post under the subject of "RE: Qualia and the Galactic Loony Bin" (since this post is long anyway, and only makes sense in the context of the original, I have included Hal's entire original post at the end for reference):

I agree with previous posters that the premise that the disembodied brain is conscious seems intuitively incorrect. It does logically lead to the conclusion that every possible brain (or brain state) exists all of the time, obtained by collecting the right set of neurons (judged by their firings) and imposing an (artificial) topology on them. Yet I'm not ready to dismiss this idea completely. Who is to judge what is artificial? I also hope that no one here is leaning on intuition as the defining metric for the worth of an idea.

How about this: Say the people with neurons in their nutrient baths all have internet connections. They have three chat forms - a "master" form, an "input" form, and an "output" form.

Someone (the grand poobah) has recorded the state of connectivity and activation between the neurons in the brain of the great one, at a moment shortly preceding the great one's demise. The poobah also has a list of who has which neuron, and can send messages to every citizen's "master" form (on an individual basis).

Initially, the poobah sends each citizen a message in turn, which consists of a list of the citizens who should be connected to that citizen's output form. The list includes a "weighting" for each connection (an integer, say). The message also contains a threshold level for that citizen. In each case, the weightings and threshold are derived from the values for the citizen's neuron and its connections in the recorded brain state.

As each citizen receives the personalised message, he/she sets up his/her output form to be connected to the input form of each person on the list. The connection is one-way.
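
(For concreteness, the setup message might look something like the sketch below. The field names and values are mine, not anything specified above; it's just one way of writing down "here are your output connections, their weights, and your threshold".)

    # A hypothetical setup message from the poobah to Citizen X's "master" form.
    # Field names and values are illustrative only.
    setup_message = {
        "citizen": "X",
        "threshold": 5,          # firing threshold for Neuron X, from the snapshot
        "outputs": [             # citizens whose input forms X's output form feeds
            {"citizen": "Z1", "weight": 2},
            {"citizen": "Z2", "weight": -1},
        ],
    }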

After the poobah has sent all messages, and all citizens have successfully completed the connections, each citizen X has forms set up as follows:

Let Neuron X be the neuron for Citizen X:

Where Neuron X had input connections from outputs of Neurons Y, now the input form for Citizen X is connected to the output forms for Citizens Y.

Similarly, where Neuron X had output connections to inputs of Neurons Z, now the output form for Citizen X is connected to the input forms for Citizens Z.

So now that the brain's connectivity is modelled, the poobah sends out messages, again personalised, telling each citizen his/her initial activation (an integer), and that of each of his/her inputs, to take effect when the poobah says go. These activations are of course taken from the brain-state snapshot. Then he sends a global "Go!" message.

Each citizen X has instructions which tell X how to interpret input messages, how to calculate X's level of activation based on these messages and the known connection weightings, and how to re-adjust weightings based on changes in activation level. X also knows to put a message in the output form when activation rises above the threshold, and when activation falls back below it.

The other thing that X knows to do when activation occurs is to stimulate the neuron in the bath.
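
In code, each citizen's job might look roughly like this (a minimal sketch, assuming each incoming message carries the weight of the connection it arrived over - the forum mechanics above leave that detail open - and ignoring the re-adjustment of weightings):

    # A rough sketch of Citizen X's rules: accumulate weighted inputs, compare
    # against the threshold, post to the output form on crossings, and stimulate
    # the real neuron in its bath when firing begins. Hypothetical only.
    class Citizen:
        def __init__(self, threshold, initial_activation):
            self.threshold = threshold
            self.activation = initial_activation
            self.firing = initial_activation > threshold

        def receive(self, weight, sender_firing):
            # An input form message: the sender started (or stopped) firing.
            self.activation += weight if sender_firing else -weight
            self.update()

        def update(self):
            now_firing = self.activation > self.threshold
            if now_firing != self.firing:
                self.firing = now_firing
                self.post_to_output_form(now_firing)   # tell Citizens Z
                if now_firing:
                    self.stimulate_bath_neuron()       # poke Neuron X in its bath

        def post_to_output_form(self, firing):
            pass  # message every connected input form

        def stimulate_bath_neuron(self):
            pass  # deliver the stimulation to the real neuron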

The other thing the citizens realise is important is for the Great One to have an environment to interact with. So one is modelled in a computer simulation, including a virtual body. Citizens who control neurons whose inputs, in the original brain, came from non-neurons are now sent messages by the simulation; similarly, those whose neurons' outputs would have gone to non-neurons now send their output messages to the sim. An attempt (very successful) is made to initialise the simulated environment to the exact surroundings of the Great One at the point in time when the brain state was captured. The sim is of course replete with simulated others for the Great One to interact with. Someone remembers to subtly alter the initial state of the sim so that the circumstances that brought about the Great One's demise do not recur! Finally, it is recognised that the citizens cannot communicate at the speed that the Great One's neurons would have, so the sim is slowed down to compensate.

The grand experiment begins...


This scenario looks a lot more like uploading than does Hal's. But from the point of view of a citizen, it is very similar. The list of instructions for firing, rather than being a monolithic, static list, is instead created dynamically. Does this make a difference?

Suppose that the community decided that the system was too fallible. The internet is too flaky; if the connection is down, each citizen must still be able to stimulate his/her neuron appropriately.

It is decided to pre-generate lists of firings to get around the communication problem. So the sim is altered to add the physical workings of the brain to the virtual body, and the sim is run, recording all firings as the simulated Great One "executes" in the sim. In a mere few nanoseconds of computer time, the Great Sim lives for millions of years of subjective time (hey, it's the future), finally succumbing to simulated death. The citizens are saddened by the knowledge that the Great One will finally die, but they decide that millions of years is a pretty good run, and so accept the fact.

The Grand Poobah generates lists of instructions for each citizen - and what a list! Each one covers millions of years of firings. A single human won't be able to carry out the firings for a neuron over that span without some chance of failure.

Luckily, the galactic civilisation is immense (billions of planets), and there is now no need for communications, as the lists of firings have been generated. The Sim is turned off, and the lists are distributed, along with their respective neurons, one to a planet. Each list item includes an exact time to fire (in Universal Standard Time; subtract half an hour for those planets on Universal Daylight Savings Time). On each planet a Council of the Neuron is appointed, and a foolproof scheme for hereditary succession is devised. The councils parcel out the responsibility into shifts, and the planets begin. Neurons are stimulated in the exact patterns given, and the continuing life of the Great One extends for billions of years across a billion star systems.
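
(Each council's shift then reduces to something like the following sketch - no communication, just a clock and the list. The timestamp format and function names are made up.)

    # Hypothetical replay loop for one planet's Council of the Neuron.
    import time

    def run_shift(firing_list, stimulate, utc_offset=0.0):
        """Fire the local neuron at exactly the listed times.

        firing_list: (fire_at, duration) pairs in Universal Standard Time, sorted.
        stimulate:   callable that stimulates the neuron for `duration` seconds.
        utc_offset:  -1800.0 for planets on Universal Daylight Savings Time.
        """
        for fire_at, duration in firing_list:
            wait = (fire_at + utc_offset) - time.time()
            if wait > 0:
                time.sleep(wait)   # no messages needed; the list says it all
            stimulate(duration)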

(My apologies if the model of a neural network above is incorrect. The correct rules could be substituted into the above without changing the point.)

---

Except that here again, one may observe that the firings are not, in some
sense, connected, and that the firings of a subset of the neurons in all
the brains (or the twinkling of a changing subset of stars, or the timing
of a selection of raindrops hitting the ground on a million worlds) are
exactly the same as the firings in that original brain would have been,
had it been connected as originally shown, or had the citizens been
linked up as in the original scenario above. So where is consciousness?

Consciousness was in the initial brain of the great one, by assumption. If
there is nothing special about neurons, then you could replace them with
nanobots and the result should still be conscious. The network of citizens
communicating as neurons could also be conscious, being a replacement of
neurons analogous to the nanobot one (perhaps regardless of whether the
neurons in the baths are activated at all).

The simulated brain should then be conscious. Replacing the citizens with
analogous software objects should make no more difference than the
transition from neurons to nanobots.

Then the activation of the individual neurons from pre-generated lists could
also be conscious. After all, it is exactly isomorphic to the simulated
virtual brain. The fact that a pattern of activations in a brain has
occurred before should not imply that a subsequent repetition of the
pattern would not be conscious.

We have two scenarios above. Scenario 1 is where people (or nanobots or
tinkertoys) model neurons and connectivity, and "run the system" by
communicating with each other to produce the desired activations. Scenario 2
is where the activations are pre-generated, and played out by the neuron
model. You may accept scenario 1 (uploading) and reject scenario 2.

Say that Scenarios 1 and 2 occurred in separate, parallel universes
(universes 1 and 2). But in universe 2, after the lists of activations etc.
are generated, the poobah decides not to bother sending them out to the
planets. Instead, he travels to universe 1, to a point immediately prior to
the start of the scenario there, and takes over the internet (he is very
clever; imagine how clever the Great One must be).

Now Scenario 1' begins, and the local Poobah begins communications. But no
communications are getting through to their targets, either from the Poobah
or from anyone else. Instead, the Poobah from universe 2 trashes all such
communications and sends each sender a false acknowledgement of receipt, as
if from the intended receiver.

Then Poobah 2 sends out fake signals of his own. He has fed all his lists
into a computer, and gets it to send fake setup messages purporting to come
from Poobah 1, and fake activation messages from the correct sending
citizens to the correct receiving citizens, with timing as dictated by the
lists.

To the citizens and Poobah 1, everything seems fine - the messages are
exactly as they would have expected. Remember that the lists were generated
from a (deterministic) sim of Scenario 1, so there are no anomalous
messages. Each faked message replaces exactly one identical, real message.
From the point of view of Poobah 1 and the citizens, Scenario 1' is
identical to Scenario 1. Yet Scenario 1' is really an instance of Scenario
2, the only difference being that in Scenario 2 the citizens know that they
are working from a list.

Above, I said that if the original brain was conscious, then Scenario 1
could be conscious too, unless the physical presence of neurons or some
other original physical thing is necessary. And now we have that Scenario 2
is conscious if Scenario 1 is conscious, unless the intention behind the
neurons/replacements matters (and how could that concept even make sense?).

I guess I am coming to the conclusion that consciousness is emergent, and
that there may be infinitely many planes of consciousness in our universe,
each based on an isomorphism of the physical world, our own being just one
such.

But that's bloody ridiculous.

Damn!

Emlyn


Hal's original post:

> One of the articles in the collection The Mind's I, edited by Hofstadter
> and Dennett, has always struck me as posing difficult problems relating
> to instantiation and playback questions. It is The Story of a Brain, by
> Arnold Zuboff. I will summarize a portion here. This is a simplification
> of Zuboff's arguments but I think catches the main idea.
>
> A man greatly admired by his society has died. However in order to show
> their gratitude and love, his people have taken it upon themselves to
> preserve his brain and induce pleasant experiences in it. They believe
> that by stimulating his neurons in appropriate patterns they are able
> to create the corresponding conscious experiences.
>
> At first this is just done by stimulating the sensory inputs to the brain,
> but over time the brain is separated into parts and the various inputs to
> each part are stimulated separately. Since everyone in the society wants
> to participate in this act of devotion, eventually it gets to where each
> person is responsible for just one neuron of the original brain.
>
> The neurons sit in small nutrient baths, and when it is time for a new
> experience to be delivered, each person receives instructions for the
> timing of how they are to stimulate their neuron. At the appropriate
> moment, each person delivers the specified patterns of stimulation all
> over the world, and the neurons go through the same patterns of activity
> which they would have if they were actually in the man's brain when he
> was experiencing some pleasurable activity. It is still thought that
> by doing this they actually bring about the corresponding mental state.
>
> Now it happens one day that just when a new experience-delivery is about
> to begin, one person finds that his neuron has died. He knows that this
> won't affect the overall experience, since neurons die all the time and
> we can't tell if there are a few more or less. But he is personally
> disappointed because he knows that he will not be able to play his own
> small part in delivering the happy experience.
>
> Then he gets an idea. His own brain is full of neurons as well, firing
> all the time. At the appropriate time, he moves the neural bath out of
> the way and bends over and puts his own brain into the position where
> "his" neuron is normally kept. Since his own brain is active, he is
> sure to have a neuron fire which is in the right place and at the right
> time for each of the stimulations he is supposed to give. In this way
> he can participate in delivering the experience using the neurons in
> his head rather than the one in the bath.
>
> But then he thinks, why bend over? It doesn't matter where the neurons
> are located, all that matters is the pattern of their stimulation.
> And then he thinks, what about all the other people who are stimulating
> their own neurons? They have brains too, full of neurons just like his.
> Any time they were supposed to be stimulating their neuron in its bath,
> they had neurons in their head which were firing at exactly that moment.
> If all that is necessary to produce a conscious experience is to have
> neurons fire at the specified times and places, there is no need for
> anyone to stimulate anything. Just by standing there, their own brains
> provide more than enough neural firings to produce any neural pattern
> (and therefore any mental experience) imaginable.
>
> It would seem to follow, then, that the entire enterprise has been
> a folly. Either all possible mental states are existing all the time
> just due to random neural firings in disconnected brains all over the
> world, or else these carefully planned stimulations, which were designed
> to mimic actual neural patterns in a conscious brain, were not actually
> producing any mental states.
>
> So, what do you think? Were they producing mental states by stimulating
> those neurons? And if so, are they still produced when they just stand
> there and let their brains do the work of firing neurons?
>
> Hal
>