Re: Identity

Eugene Leitl (eugene.leitl@lrz.uni-muenchen.de)
Sun, 22 Nov 1998 18:59:16 +0100

Void where inhibited writes:

> But despite the fact that these two objects (dos, duo, not 1 or 3; four is
> right out!) are identical, they are not the "same object." They are:
> *two* *indiscernible* objects.

Of course there are two of them. Nobody denied that.

> To say that two indiscernible objects are really the SAME object, we'd
> have to imagine some metaphysical object off in Plato's heaven, an object
> of which the two things we see are mere instantiations. Even then,
> however, they'd still be *two* instantiations of one thing.

Of course they are two different objects, but their state is the same. As to the question of consciousness, only rabid Penrosians would claim that the fine structure of quantum levels is of any relevance here. Indistinguishability of two objects in the same state is merely a convenient, rigorous argument against doubting Thomases who disagree on general principles. Since the brain is a very noisy system, any talk about quantum levels is entirely ridiculous. Furthermore, the Hamiltonian of neural dynamics (atomic 'clock' roughly 1 ms, psychological clock about 100 ms) is strongly divergent: the system is very chaotic. Cloned objects bifurcate instantly.
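The bifurcation claim can be illustrated with a toy chaotic map; a minimal Python sketch (the logistic map is just my stand-in for a strongly divergent system, not a model of neural dynamics):

```python
# Two "clones" of a chaotic system (the logistic map at r = 4) whose
# initial states differ by one part in 10^12 -- far finer agreement than
# any copying process could guarantee for a warm, noisy brain.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.4, 0.4 + 1e-12  # nearly indiscernible initial states
steps = 0
while abs(a - b) < 0.01 and steps < 200:
    a, b = logistic(a), logistic(b)
    steps += 1

# The separation grows roughly by a factor of two per iteration, so the
# clones decorrelate within a few dozen steps.
print(steps)
```

The point is only the exponential growth of an arbitrarily small initial difference; any chaotic update rule would do.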

We're talking so glibly about making an instant copy of a macroscopic object without disturbing its state. If you do so passively, you need infinite (and I mean it) time spans; if you do it actively, you turn the object into a nice ball of hot quark-gluon plasma, or a cloud of logics. In any case the original object will be destroyed. But of course you can make several copies, albeit far from being in the same state. So their initial positions in state space are distinct, their inputs are different, and as you can't freeze time at a given frame these objects start to diverge while they are still being constructed, and watch my mouth run.

Let's talk about minds instead. You can't clone the physical structure for the above reasons, but you could do a

  1. gradual upload
     1.1 destructive
     1.2 nondestructive
  2. destructive post-mortem upload

A destructive gradual upload a la Moravec starts with one system and ends with one system. It is not cloning, but incremental migration of cognition to a neosubstrate. Nondestructive upload permeates your brain with a hi-res grid of probes, nanocomputers, communication and energetic infrastructure, making it swell to twice or thrice its original volume and consume kWs and rivers of coolant. You use all this apparatus to observe your mind's spatiotemporal dynamics for a long time, creating a computational model from it (it's much harder if you have to do it hands-off, i.e. passively). After that, you reverse the monstrous nanooedema, refill the liquor, close the skull and scalp, regrow your hair (optional, in some people). Even if you manage to maintain identical sensory input (nice trick, that), because of simulation inaccuracies and intrinsic system noise the systems will have bifurcated almost instantly. Destructive post-mortem upload makes a molecular map of your vitrified cerebrum, destroying it in the process -- so it's not cloning minds either.

The only area where talking about minds in the same state is meaningful is when all of them are discrete deterministic simulations under our control. You can control their initial state and their input, and page back and forth along their trajectories. You're God, in short. (Dull job, but somebody has to do it.)
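For deterministic simulations the situation really is that clean; a minimal Python sketch (the update rule is a hypothetical stand-in for the simulated system) showing that identical state plus identical input means identical trajectory, and that a checkpoint lets you page back and rerun:

```python
def step(state, inp):
    # Hypothetical deterministic update rule; any pure function will do.
    return (state * 6364136223846793005 + inp) % (2**64)

s1 = s2 = 12345   # two copies in exactly the same initial state
checkpoint = s1   # "freeze the frame" for later replay
for inp in [7, 3, 9, 1]:
    s1, s2 = step(s1, inp), step(s2, inp)

print(s1 == s2)  # True: same state + same input = same trajectory

# Paging back: restore the checkpoint and rerun with different input.
s3 = checkpoint
for inp in [7, 3, 9, 2]:  # one input changed
    s3 = step(s3, inp)
print(s3 == s1)  # False: the trajectories bifurcate
```

This is exactly the godlike control the paragraph describes: rewind, fork, and replay at will, something physics denies you for the original wetware.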

The difficult questions begin where you want to prove that two systems encoded differently are very similar, or nearly identical (of course 'identical' is the unattainable limit of 'similar'). You can't compare the person pattern directly; it's not identical, since it is in a different encoding. And because of that you also can't simply compare the dumped trajectory. Furthermore, even if the systems differ infinitesimally (less than two alternative universes with two 'gene's differing in a single neuron), they diverge progressively. Panta rhei. The only way one could establish that would seem to be making a lot of reruns a la amnesiac Groundhog Day, starting with the same state but varying input, and statistically analyzing output.
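The Groundhog Day protocol can be sketched as well; a toy Python version, assuming both encodings expose the same input/output interface (both "systems" below are hypothetical stand-ins, one integer-coded, one float-coded):

```python
import random

# Two different encodings of the "same" system: internal states are
# incomparable, so we compare behaviour over many reruns instead.
def system_a(seed, inputs):
    state, out = seed, []
    for x in inputs:
        state = (state * 31 + x) % 1000
        out.append(state % 10)
    return out

def system_b(seed, inputs):
    state, out = float(seed), []
    for x in inputs:
        state = (state * 31.0 + x) % 1000.0
        out.append(int(state) % 10)
    return out

# Groundhog Day: many reruns from the same initial state, varied input,
# statistically comparing outputs rather than internal encodings.
rng = random.Random(42)
runs = 1000
agree = 0
for _ in range(runs):
    inputs = [rng.randrange(100) for _ in range(20)]
    if system_a(7, inputs) == system_b(7, inputs):
        agree += 1
print(agree / runs)  # fraction of behaviourally identical reruns
```

Real minds would never match exactly, of course; there the comparison becomes statistical distance between output distributions rather than equality counts.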

Actually, the latter problem is not academic: you have to solve a similar task when you start with an upload at some low level (MD, compartmental model, whatever) obtained from your molecular-resolution map and want to encode it more compactly/efficiently for a given hardware while assuring equivalence. Of course you can circumvent the metaphysics of it by substituting blocks incrementally in the course of the emulation. If the patient doesn't mind...
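The incremental substitution can be sketched too; a toy Python version, assuming the emulation decomposes into blocks with checkable input/output behaviour (all names and the probe-set idea are hypothetical):

```python
def fine_block(x):     # expensive low-level model of one block
    return (x * 3 + 1) % 97

def coarse_block(x):   # candidate compact re-encoding of the same block
    return (3 * x + 1) % 97

blocks = [fine_block] * 8  # the running emulation: a pipeline of blocks

def run(blocks, x):
    for b in blocks:
        x = b(x)
    return x

# Substitute one block at a time, verifying behavioural equivalence on a
# probe set before committing -- no global person-pattern comparison needed.
probes = range(97)
for i in range(len(blocks)):
    candidate = blocks[:i] + [coarse_block] + blocks[i + 1:]
    assert all(run(candidate, p) == run(blocks, p) for p in probes)
    blocks = candidate

print(all(b is coarse_block for b in blocks))  # True: fully migrated
```

Each step only has to certify one block against its predecessor, which is the whole point of doing it in the course of the emulation.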

ciao,
'gene