Re: AI motivations & Self-rewiring [was Re: technophobes]

Robert J. Bradbury
Sat, 23 Oct 1999 23:52:18 -0700 (PDT)

On Sat, 23 Oct 1999, phil osborn wrote:

> This whole discussion is a good example of how ignorant even most extropians
> are re the actual nature of consciousness, unfortunately.

Phil, I'm not sure if you are referring to my comments or others', so I'm going to leave that aside and simply state that "consciousness" is a very "fuzzy" concept in the minds of most people. I believe it gets even fuzzier when you try to distinguish between consciousness (something I think many people have) and self-awareness (something that is much less tangible in the average individual).

> Consciousness builds itself from successful feedback, as in thumb against
> forefinger in the womb.

I'm unsure here whether you are discussing consciousness (CNS) or self-awareness (SA) at a very low level. The example seems to me to involve more SA than CNS. In the womb you have insufficient experience to be conscious of your thoughts, though you may have some self-awareness of your perceptions.

> Consciousness is not and cannot be a program or any set of
> symbolic manipulations separate from sensory data - and action.

This is an assertion. But the words in it are sufficiently ill-defined (e.g. consciousness, program, symbolic manipulations, action) that I cannot agree or disagree with it.

I would argue that the "consciousness" we attribute to most human beings allows us to predict a set of actions based on a set of sensory data. As such it is a program. The program might have wide error bars and many degrees of freedom, but as a generalization I would say most "conscious" behavior is predictable.

... The individual puts his hand on a red-hot burner on a stove. ... The individual *will* remove his hand very quickly.

Those are simplistic I realize.

... The individual will fall in love with someone fitting their images of a desirable mate.
... The individual will remain with that "someone" through very difficult periods until it becomes very clear that their survival may depend on leaving that mate.

More complex behaviors & predictions simply involve a more detailed understanding of the algorithms involved.
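To make the "behavior as a program" claim concrete, here is a minimal sketch (mine, not Bradbury's) of what such a predictive mapping might look like. The stimulus names, the lookup table, and the `predict_action` function are all illustrative assumptions, standing in for a vastly more detailed understanding of the actual algorithms:

```python
# Toy sketch: conscious behavior as a predictable mapping from
# sensory data to action. The table entries mirror the examples
# above; all names are hypothetical placeholders.

def predict_action(stimulus):
    """Return the predicted action for a given sensory stimulus."""
    responses = {
        "hand_on_hot_burner": "withdraw_hand",
        "meets_desirable_mate": "fall_in_love",
        "mate_threatens_survival": "leave_mate",
    }
    # Stimuli outside the table fall outside the predictable range --
    # the "error bars" of the generalization.
    return responses.get(stimulus, "unpredicted")

print(predict_action("hand_on_hot_burner"))  # -> withdraw_hand
```

The point of the sketch is only that, at this level of description, prediction is table lookup; more complex behaviors would replace the table with richer algorithms.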

> Fortunately, as Bart Kosko pointed out today on the "Digital Village," it
> may not matter, as the process of uploading, and also, very possibly,
> producing the next generation of SI's, can just as likely happen
> incrementally.

Yes, I believe that this will be the case.

> The Boomers will be demanding medical solutions to failing
> organs, including brain, retina, etc. The solutions already in the works
> at places like UCI involve creating chip replacements/enhancements on a
> piecemeal basis. Thus, the transition to uploading may never involve
> attempting to move consciousness in toto onto another platform from
> meatware.

True. The incremental approach may carry us down the slippery slope. If so, that is probably good, since it makes it more difficult for the luddites to stop it.

The problem is not in "moving consciousness", but lies in the realm of whether or not consciousness (or self-awareness, with motivation for survival/replication) can be created and allowed to propagate freely.