Re: Qualia and the Galactic Loony Bin

Max More
Sat, 19 Jun 1999 11:20:56 -0700

At 06:56 PM 6/18/99 -0700, Hal wrote:
>It would seem to follow, then, that the entire enterprise has been
>a folly. Either all possible mental states are existing all the time
>just due to random neural firings in disconnected brains all over the

Hans Moravec takes this view even further and believes that minds exist everywhere, given a clever enough interpretation. You don't even need neurons to have a mind. He ends up denying any need to upload, falling into a Platonic black hole.

>So, what do you think? Were they producing mental states by stimulating
>those neurons? And if so, are they still produced when they just stand
>there and let their brains do the work of firing neurons?

Without the appropriate causal connections (which I'm going to avoid even trying to specify), I think there cannot be mental states or qualia. Some simulations will not produce qualia, such as those that use a gigantic look-up table to reproduce the input-output relations (though I doubt that such a means of simulation would be workable). It may be hard to say where in Zuboff's story consciousness is really lost, but I think it happens when the causal connections are lost and the firings are merely simulated. Since I believe that mental states are processes embodied in physical structures, and that the causal interaction of the underlying physical processes matters, a mere simulation that ignores those causal relations will not have mental states.
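To make the look-up-table point concrete, here is a minimal sketch (my own toy example, not anything from Zuboff or the original post): a two-neuron "brain" simulated causally, step by step, versus a table that maps each (state, input) pair straight to the next state with no intervening dynamics. The names and the threshold rule are invented for illustration.

```python
# Toy illustration (hypothetical): two ways to reproduce the same
# input-output behavior of a tiny two-neuron system.

def causal_step(state, inp):
    """Causally simulate two mutually connected threshold neurons:
    each neuron's next firing depends on the other's current firing."""
    a, b = state
    next_a = 1 if (b + inp) >= 1 else 0  # neuron A listens to B and the input
    next_b = 1 if a >= 1 else 0          # neuron B listens to A
    return (next_a, next_b)

# A gigantic look-up table replaces the causal process entirely:
# every (state, input) pair is mapped directly to its successor state.
LOOKUP = {
    (s, i): causal_step(s, i)
    for s in [(0, 0), (0, 1), (1, 0), (1, 1)]
    for i in (0, 1)
}

def table_step(state, inp):
    # Same input-output relation, but nothing inside "fires" at all.
    return LOOKUP[(state, inp)]

# The two agree on every case -- behaviorally indistinguishable,
# yet with entirely different internal causal structure.
for (s, i) in LOOKUP:
    assert causal_step(s, i) == table_step(s, i)
```

The behavioral equivalence is exact, which is precisely why the example is philosophically interesting: if qualia depend on the causal interactions and not merely on the input-output relations, the table-driven system would lack them. (For a brain-sized system such a table would of course be astronomically large, which is one reason to doubt the method is workable in practice.)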

I don't have any developed view of how far an emulation of brain processes can stray from preserving the original causal relations without losing qualia. I see no reason to think that replacing biological neurons with synthetic neurons having the same connections would be a problem. (So I can't agree with Searle.) I'm not sure whether I would be willing to upload into a computer that ran a simulation of my mind in a serial manner, though I wouldn't worry if the computer were a true artificial neural network (rather than just simulating one, as most do today).

That still leaves a large gray area that I'm not sure about -- I don't know whether the ANN needs to emulate each neuron so that the causal relations between the processes in each neuron are preserved. Maybe emulating groups of neurons (where the processes in the computer differ significantly from the processes in the original neurons) would be fine. Probably much of the biological detail is irrelevant. But just how far up the scale of cognitive function you can go before losing important causal relations, I don't know. I think we may be able to answer that question with more confidence when we understand much more about how awareness arises in the brain.


Max More, Ph.D.

Implications of Advanced Technologies
President, Extropy Institute: EXTRO 4 Conference: Biotech Futures.