Re: distributed AI was Re: computronium prime-oxide

Hal Finney (hal@rain.org)
Thu, 19 Nov 1998 16:56:26 -0800

Paul Hughes wrote:

> Having experimented quite a bit with float tanks, I can say the brain
> certainly seems to have a need to continue to process information,
> whether of an external or internal nature. John Lilly recommended the
> tank experience for that reason - it allowed the user to process
> internal information that they would otherwise be unable to attend to
> amid environmental distractions. In my experience, solutions to several
> problems I had been working on in frustration revealed themselves while
> I was in the tank.

Of course you would agree that you remained conscious. The point is, Dennett appeared to be denying that a Turing machine could be conscious unless it was interacting with its environment. It doesn't make much sense for that requirement to hold for a Turing machine when it apparently does not hold for people.

I might add that it should be possible to engineer resistance to hallucinations into an AI. Just because people eventually begin to spin their mental wheels when deprived of the traction of sensory experience doesn't imply that all beings would be that way. I am skeptical that there is an iron law of consciousness saying that all brains must behave this way when deprived of input.

Michael Lorrey wrote:

> I dunno. Think of a fetus. Do you know of any cases whatsoever where the fetus
> was aware and thinking while in the womb, which is a remarkable sensory
> deprivation chamber? A human being does not really become aware until at least
> a few weeks to a few months after being born. They need time to learn how to
> sort out all of the sensory input into a rational format.

I had the impression that the question was whether consciousness requires present-time interaction with the environment, not whether there was interaction at any time in the past. The latter would be consistent with Turing-machine functionalism, as I understand it. The question is: can you take a program, run it on a Turing machine, and thereby make it conscious?

The source of the program is irrelevant for answering this question. If any program has the property that running it on a Turing machine produces consciousness, then Turing functionalism is true. If it turns out that the only way to make such a program is by running an interaction with the environment, fine; the same goes for any other way of creating it. The source doesn't matter. All that matters, for this question, is whether running a program can produce consciousness.
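
To make concrete what running a program on a Turing machine with no
present-time input means, here is a minimal sketch (a toy illustration
in Python; the machine, its states, and its rules are invented for the
example, not taken from anything above). It is a machine that
increments a binary number: once the transition table and the initial
tape are fixed, the entire run unfolds with nothing arriving from the
environment.

# Transition table: (state, symbol) -> (new_symbol, move, new_state).
# This toy machine increments a binary number written on the tape.
RULES = {
    ("seek_end", "0"): ("0", +1, "seek_end"),
    ("seek_end", "1"): ("1", +1, "seek_end"),
    ("seek_end", "_"): ("_", -1, "carry"),
    ("carry", "1"): ("0", -1, "carry"),
    ("carry", "0"): ("1", 0, "halt"),
    ("carry", "_"): ("1", 0, "halt"),
}

def run(tape, state="seek_end", head=0):
    tape = dict(enumerate(tape))  # sparse tape; blank cells read as "_"
    while state != "halt":
        symbol = tape.get(head, "_")
        new_symbol, move, state = RULES[(state, symbol)]
        tape[head] = new_symbol
        head += move
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape.get(i, "_") for i in cells).strip("_")

print(run("1011"))  # -> "1100": 11 + 1 = 12, with no input during the run

The point of the toy is only that nothing outside the table and the
tape influences the run. Whether such a closed run could ever amount to
consciousness is exactly the question at issue.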

Hal