Re: Emulation vs. Simulation

From: Lee Corbin (lcorbin@ricochet.net)
Date: Mon Mar 26 2001 - 00:38:38 MST


Robert J. Bradbury wrote:

> You will have to split hairs here -- is a statistical response behavioral
> unit an 'emulation'? If you say yes, then I would argue that it does not
> have everything we do. I would argue a statistical response behavioral
> unit is much closer to a 'portrayal'. I would also argue that few humans have
> the necessary tools to distinguish between an emulation and a portrayal.

The whole question is how plausible the existence of a statistical
response unit really is, and how well it could survive on its own.

> At some level of technological sophistication, I can statistically 'emulate'
> human behaviors (i.e. given these inputs produce those outputs) and the
> humans would not have a clue that they were dealing with a zombie vs. a real
> person. You *DO NOT* need consciousness or feelings. You simply need to
> replicate the most common responses in those situations. That is something
> that is entirely programmable.

I contend that these "common responses" are inadequate for survival in
the real world. To be very concrete, let us ask what is the minimal
programmable unit that could (a) hold down a job at Microsoft or Sun,
(b) drive to work every day, (c) shop and do all the mundane things
(e.g., fill out driver's license applications) necessary to 21st-century
existence?

Forget Silicon Valley: even if it were possible in *ancient Sumeria*
for someone to do what it took to survive back there, why wasn't
there a series of natural mutations that got rid of all the excess
baggage like consciousness, feelings, etc.?

The answer is that it's not possible. It, like so many other
programming projects, only seems feasible. The first AI that
would be capable of getting along in Silicon Valley (or ancient
Sumeria) would be almost exactly as conscious and feeling as
humans are. There just aren't any shortcuts (or nature, for one,
would have found them).

(By the way, before I forget: by "portrayal" I mean "a puppet," if
you get the analogy. Picture the smug SI with his metaphorical hand
running a (metaphorical) puppet in front of some people, and being
very mildly amused at how easily they are fooled, and the way that they
think his puppet to be the greatest human being since Jesus Christ.)

Now we get down to the deep part (in my opinion):

> You can be a zombie with zettabytes (10^21) of code that
> says "In this situation, I say or do X". [For reference,
> human memories appear to be able to retrieve several hundred
> megabytes (~10^8) of information, though their 'recognition'
> capacity may be greater.] That 'program' has no consciousness
> ('feelings'?) that says 'I am self-aware'. It doesn't have
> to run a self-rehearsal program (which is what consciousness
> is if you follow Calvin). It simply 'knows' the preprogrammed
> responses to a huge number of situations.

Okay, suppose that we have a creature (that I still don't want
to call a zombie) which has 10^x times as much storage capability
as a human being, and it merely does a lookup for everything that
could conceivably happen to it. It never really calculates
anything. I will officially concede that if x is a big enough
number, then the entity is not conscious, and therefore is a
zombie.
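
To make the distinction concrete, here is a rough sketch in Python (the
names and structure are purely my own illustration, not anyone's actual
design) of the two kinds of entity: one that only looks its behavior up,
and one that calculates it.

# Illustrative sketch only -- not a real design.
class LookupEntity:
    """Responds purely by table lookup; it never calculates anything."""
    def __init__(self, table):
        # table maps every anticipated situation to a canned response
        self.table = table
    def respond(self, situation):
        # no computation: a situation missing from the table has no answer
        return self.table.get(situation, "<no entry>")

class CalculatingEntity:
    """Works its response out from the situation on the fly."""
    def __init__(self, think):
        self.think = think   # some function standing in for real thinking
    def respond(self, situation):
        return self.think(situation)

zombie = LookupEntity({"hello": "hi there"})
print(zombie.respond("hello"))                 # "hi there"
print(zombie.respond("your badge, please?"))   # "<no entry>" -- unforeseen
person = CalculatingEntity(lambda s: s[::-1])  # crude stand-in for calculation
print(person.respond("your badge, please?"))   # computed, however crudely

The point is that the first entity fails the moment it meets a situation
nobody anticipated; the second at least has a procedure to fall back on.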

BUT!!! I think that 10^x is so big that it is completely out
of the question that anyone would ever want to build a creature
with 10^x times as much storage as a human being, merely to
imitate a trivial human being. Moreover, you might have to
run a human being or emulation of one (there's no difference)
in order to generate the table. And in that case, it can be
argued that when you ask the "zombie" a question, you are in
effect asking the original emulation the question.
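
To put that last point in code (again, just my own hedged illustration
with made-up names): building the table means running the emulation over
every situation in advance, so querying the "zombie" later is only a
replay of the emulation's earlier answers.

# Illustrative only: the conscious work, if any, happens once, here.
def build_table(emulation, all_situations):
    return {s: emulation(s) for s in all_situations}

def ask_zombie(table, situation):
    # nothing is computed here; in effect you are asking the original
    # emulation, just with a long delay
    return table[situation]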

(Incidentally, I stopped calling myself a functionalist a few
years back exactly because of this question of lookups: for,
just as you say, technically speaking there can be (absurdly
large) lookup tables that indeed act conscious, but are not.)

But where does this leave us? In my opinion, we have these
cases:

A. A puppet or apparition run by a remote SI
B. A self-contained entity with a fantastically large lookup table
C. A self-contained entity with capabilities of human or above,
   but which calculates its behavior (doesn't look it up)
D. A self-contained entity that is like C, but isn't conscious

If A, then the question is moot, since the entity doesn't
really exist. If B, the entity is not conscious, but is
probably just the standing record of an earlier run of a
conscious entity. If C, then the entity is conscious. If
D, then it's not smart enough to survive in a challenging
environment (unlike a dog). Only case B might be considered
a zombie, but that case is not what people are talking about
on this list (until your post).

Therefore, any creature in your immediate vicinity either has
a big lookup table that, because the table isn't big enough,
still cannot survive on its own---or else it engages in
calculation, is really no more efficient about it than we
are, and so is conscious. Either way, in a realistic
scenario, we don't have "zombies". (P.S. I'm not even sure that
10^21 actions would be enough for a human---remember, it has to
act human IN EVERY POSSIBLE situation, which might mean millions
of separate emulations had to go into the lookup table.)
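
A rough back-of-the-envelope check on that P.S. (the numbers here are
pure assumption, just to show the scale): even a very coarse notion of
"situation" overwhelms 10^21 table entries.

# Illustrative arithmetic; vocabulary size and window length are assumed.
vocabulary = 1000      # a modest working vocabulary
window = 20            # call a "situation" the last 20 words heard
situations = vocabulary ** window   # 1000^20 = 10^60 distinct situations
print(situations > 10**21)          # True: the table can't cover them all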

As a slightly relevant aside, I hope you believe that Searle's
Chinese Room is conscious, intelligent, and has feelings!?

Lee Corbin


