On Fri, 23 Mar 2001, Lee Corbin wrote:
> Here is what is happening: The Holodeck, like any good SI, mocks
> up a portrayal of Professor Moriarty for the benefit of the
> actual sapients watching. At this point, by definition, Moriarty
> has no consciousness or feelings. You could, for example,
> attempt to do anything whatsoever to the Professor without
> even coming close to harming anyone.
Here is an interesting twist, based on some of my comments
regarding the need for robotic junk retrieval systems and
the telepresence robots -- say the zombie 'body' or 'hologram'
is being remotely operated by a real human being (or an SI).
Now in this case you can do whatever you want to the body/holo
and you are not harming anyone. However, humans are designed
to have empathy for things that look, walk, and talk like
humans (they are also designed to be wary of them until
a trust-bond is developed). So the tele-operator is going
to have to be 'tightly' wired to the remote body, such that
pain caused to the remote body will generate natural reactions
in the operator. [Presumably sensory safety devices cut
in at extreme levels.] Now, the operator is getting paid
quite well for performing this service and has freely
entered into this situation.
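To make the 'tightly wired, with safety cutoffs' arrangement concrete, here
is a toy Python sketch (the cutoff value and names are purely my own
invention for illustration):

    PAIN_CUTOFF = 0.7   # hypothetical normalized level where the safety device cuts in

    def relay_to_operator(remote_pain):
        # Pass low-level pain through faithfully so the operator reacts
        # naturally; clamp it once it reaches the safety cutoff.
        return min(remote_pain, PAIN_CUTOFF)

    for reading in (0.1, 0.4, 0.95):   # mild, moderate, extreme pain at the remote body
        print(reading, "->", relay_to_operator(reading))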
In this case is the remotely piloted body/holo a zombie?
> But then comes the transformation that you describe: the SI
> spawns the Professor Moriarty process, and now there is a
> separate, human-level sentience, with separate consciousness,
> feelings, etc.
Given that the SI is writing the code that 'operates' the
telepresence body/holo, I would argue that it can prevent
it from ever becoming self-conscious. You simply execute the
common standard behaviors for similar situations and random
behavior-selection strategies for the rarer situations (not too
different from your standard-issue human as far as I can tell).
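A minimal sketch of that selection strategy (the situations and behaviors
are made up; the point is that nothing in it needs to be self-conscious):

    import random

    STANDARD_BEHAVIORS = {              # common standard behaviors for familiar situations
        "greeted": "smile and say hello",
        "insulted": "raise an eyebrow",
    }
    RARE_SITUATION_BEHAVIORS = ["shrug", "change the subject", "ask a question"]

    def select_behavior(situation):
        if situation in STANDARD_BEHAVIORS:
            return STANDARD_BEHAVIORS[situation]
        return random.choice(RARE_SITUATION_BEHAVIORS)   # random selector for the rarer cases

    print(select_behavior("greeted"))
    print(select_behavior("asked about Moriarty"))        # rare situation -> random fallback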
> As to whether the actual Moriarty consciousness
> (if we feel justified in talking about such a thing), or the
> actual Moriarty feelings (ditto) existed isomorphically in
> the complex SI is a question too deep to pursue in this email.
> (I believe the answer is, for all practical purposes, "no".)
But 'feelings' are genetico-socio-'thingys' (insert a word
here to represent a neurocomputational 'pattern') that are
designed to promote survival. There is no reason to elevate
them to significance (if that is what you are doing). They
are subroutines designed to promote behaviors that have
specific goal seeking strategies in the framework of the system.
> But why dignify that earlier mere portrayal with the term
> "zombie", which HAS ALWAYS MEANT an independent creature
> (not a puppet) whose behavior is identical but who lacks
> consciousness (which is impossible or nonsensical)?
Ah ha, so here a zombie cannot be 'tele-operated'. But
if a non-tele-operated creature does not have 'feelings' that
promote its survival, then it's very rapidly a dead zombie.
The zombie movies would be much less interesting if they
were not trying to 'kill' humans (presumably motivated
by feelings). If they were just stumbling around in the
world, it's simple -- 'Thunk', you no longer function.
But I don't buy the impossible/nonsensical part. With the
statistical correlation and fuzzy logic capabilities that
we now have, do you not think we could produce fully
functional zombies with no consciousness that are unrecognizable as such?
I think the AOL-Eliza example demonstrates that this
is feasible based on much simpler principles. The
interesting figure-of-merit is the time it takes
individuals of various IQs or educations to recognize
they are talking to a non-conscious entity.
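For the flavor of it, here are a few canned keyword rules in Python (not
AOL's actual code, just the general Eliza trick; the rules are invented).
The figure-of-merit would be how many exchanges it takes a given person to
notice that nothing conscious is on the other end:

    import re

    RULES = [
        (r"I feel (.*)", "Why do you feel {0}?"),
        (r"\bmother\b", "Tell me more about your family."),
    ]
    FALLBACK = "Please go on."

    def respond(utterance):
        # Return the first matching canned response, else a generic prompt.
        for pattern, template in RULES:
            match = re.search(pattern, utterance, re.IGNORECASE)
            if match:
                return template.format(*match.groups())
        return FALLBACK

    print(respond("I feel trapped on the Holodeck"))   # -> "Why do you feel trapped on the Holodeck?"
    print(respond("The weather is fine"))              # -> "Please go on."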
You can be a zombie with zettabytes (10^21) of code that
says "In this situation, I say or do X". [For reference,
human memories appear to be able to retrieve several hundred
megabytes (~10^8 bytes) of information, though their 'recognition'
capacity may be greater.] That 'program' has no consciousness
('feelings'?) that says 'I am self-aware'. It doesn't have
to run a self-rehearsal program (which is what consciousness
is, if you follow Calvin). It simply 'knows' the preprogrammed
responses to a huge number of situations.
Now, I've kind of lost track of the definitions of 'zombie'
in this message, but it seems clear to me that a zombie in the
classical sense -- not self-conscious or self-aware, but
Turing-test equivalent to a human -- should be feasible. This
becomes further complicated by the tele-operation scenario.
There you are 'relating' to a real consciousness but one
that may be arbitrarily disconnected from the pleasures or
pain experienced in life. I.e. they can dial up or down
the motivational 'feelings'. The degree to which you
perceive them as a 'zombie' depends in large part on
their acting abilities. They may dial down the stimulation
vectors, but know how they should react in specific
circumstances and so you have no way of knowing whether
or not it's 'real' or a remotely directed experience.
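In toy form (again, the numbers and names are invented), the gain knob and
the acting are separate things:

    def operator_reaction(stimulus, feeling_gain):
        felt = stimulus * feeling_gain                        # dialed-up or dialed-down 'feeling'
        acted = "wince" if stimulus > 0.5 else "no reaction"  # the acting follows the script, not the feeling
        return felt, acted

    print(operator_reaction(0.9, feeling_gain=1.0))   # fully wired: feels it, and winces
    print(operator_reaction(0.9, feeling_gain=0.0))   # dialed down: feels nothing, still winces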
Robert