Re: Is cryopreservation a solution?

Hagbard Celine (hagbard@ix.netcom.com)
Thu, 11 Sep 1997 13:38:52 -0400


Anders Sandberg wrote:

> Holism can work the other way too: an almost correctly put together
> brain would self-organize to a state very similar to the original
> person as memories, personality, chemical gradients, etc., relaxed
> towards a consistent state.

It seems that some outside intervention would be necessary in this case.
Is there any precedent for cellular self-organization? That is, if the
neurons are simply in the wrong places, what would make them migrate
spontaneously to their identity-creating positions? Perhaps after
reconstructing the brain it would be possible to move neurons about by
trial and error, based upon where activity is occurring and where it is
not, but otherwise this seems to run against the second law of
thermodynamics.
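
To make that trial-and-error idea concrete, here is a toy sketch
(everything in it -- the random wiring, the probe input, the
swap-and-keep rule -- is an assumption for illustration, not a claim
about how real repair would work). Neurons are scrambled into the
wrong positions, and we keep only those swaps that bring the network's
measured activity closer to the original's:

import numpy as np

rng = np.random.default_rng(1)
N = 30
W = rng.standard_normal((N, N))   # the "correct" wiring (illustrative)
x = rng.standard_normal(N)        # a fixed probe input

def activity(order):
    # Network activity with neurons placed according to `order`
    return np.tanh(W[np.ix_(order, order)] @ x)

target = activity(np.arange(N))   # activity of the original brain
order = rng.permutation(N)        # neurons scrambled into wrong places

def error(order):
    return float(np.sum((activity(order) - target) ** 2))

# Trial and error: swap two neurons, keep the swap only if the
# observed activity moves closer to the original's.
for _ in range(20000):
    i, j = rng.choice(N, size=2, replace=False)
    trial = order.copy()
    trial[[i, j]] = trial[[j, i]]
    if error(trial) < error(order):
        order = trial

print("remaining activity mismatch:", error(order))

Greedy search like this can stall in a local optimum with some mismatch
left over, which is rather the point of the worry: activity alone may
not tell you where every neuron belongs.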

> It is funny to notice that the idea of holism is in some sense
> deeply conservative. It claims that we can (and must) understand
> the whole system without looking at the parts, and that attempts
> to do so anyway will fail. The odd thing is that top-down
> explanations (which are holistic) have so far not been very
> successful in helping us understand the mind, the body and the
> universe despite centuries of thinking, while the bottom-up
> explanations of reductionism have worked quite nicely, even if
> they are limited.

I am not a strict holist by any stretch. But when one attempts, from a
materialist standpoint, to use biology to explain a non-biological, or
as-yet philosophical, concept, holism becomes useful.

In the case of identity, this utility arises because the sum of our
biological parts yields a perceptually non-physical abstraction -- human
identity. That is, common sense would suggest that the sum of our
biological parts will always be a biological machine. But in the case of
identity, this is not true. Identity is not a part of the machine. You
can't pick up, hold, or replace your identity in any physical sense.
Identity exists above the biological machine, yet also affects it from
that same meta-level.

So holism is useful in this scenario only insofar as it offers a
tentative explanation for identity based upon biology -- that is,
identity arises when all the pieces are locked into a specific place,
and in a specific way, birthing a more-than-biological thing.

What would be the reductionist explanation for identity? Or, for that
matter, what are the other ways of explaining it?


> Personally I see no problem with having complex and abstract
> properties emerging from low-level systems; they do it all the
> time. As I think Carl Sagan said: "there is no such thing as
> *mere* matter!" The simplest systems demonstrate complex
> behaviors based on simple internal interactions; we can choose on
> what level we want to study them, but it is often easier to deal
> with the lower levels first and then deduce how they interact to
> produce higher levels.

Of course. But I am arguing that the higher levels (more complex,
indeed) are still biological. How does one deduce the biological
interactions that produce an abstract identity? For that matter, what
sorts of abstract properties do you know of that have emerged from
non-abstract low-level systems? And are not these abstract properties
more than the sum of the non-abstract system's parts?

Correct me here if necessary. You seem to be suggesting that a pattern
exists in the neural network of the brain which, if mapped, would allow
us to fully repair a partially reconstructed brain, pre-existing
identity included. I don't know enough neuroscience to comment, but
what if there is no pattern? What if every neuron must be placed
exactly where it was in the original? That would severely limit our
ability to repair the brain, and it would lend credence to a holistic
view of identity, since there would be no systemic level between the
neural arrangement and identity.

One consequence of a bottom-up definition of identity is the increased
likelihood that we will have the ability to alter identity as we wish.
(Ah, autopotency...) I like the prospects of that, although what happens
when you make a change to your identity that makes you more likely to
want to change your identity in a way that makes you more likely to
change your identity? Hofstadter fans take note.
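
As a toy version of that loop (purely illustrative; the propensity
variable and the feedback gain are my own assumptions, not a model of
minds):

# p is a propensity to self-modify; each modification multiplies p
# by an assumed feedback gain. gain > 1 runs away; gain < 1 settles.
def run(p, gain, steps=12):
    history = [round(p, 3)]
    for _ in range(steps):
        p = min(p * gain, 1.0)    # cap at certainty for readability
        history.append(round(p, 3))
    return history

print("gain 1.5:", run(0.05, 1.5))   # runaway: change begets change
print("gain 0.8:", run(0.05, 0.8))   # damped: the urge to change fades

Whether a real self-modifying identity has a gain above or below one
is, of course, exactly the open question.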

>
> > This is not to say that our understanding of the brain will never reach
> > the point where we can "fudge" things a little. But, one neuron
> > incorrectly arranged may have little effect on identity -- say one step
> > on the identity continuum. Two neurons incorrectly arranged -- two
> > steps. Would three neurons incorrectly arranged have only a three-step
> > effect? Or would it be four steps? Or six? What about four neurons? An
> > exponential, geometric or one-to-one effect on identity?
>
> That depends on how you measure geometry, what local metric you use
> in identity-space (see Sasha's excellent paper about it, on his website).
> My guess is that the effect will depend a lot on which neurons are
> erroneous; some are more important than others. My experience
> from neural networks suggests that they tend to be fairly stable as
> you disrupt them until a certain point, and then they quickly break
> down. So the identity left after a certain number of fudges
> would look like this:
>
> --------- 100% identity
>          \
>          |
>          |
>          |
>          |
>          |
>           \
>            ------------ Zero identity
> ---Neural Change-------->
>
> This suggests that small errors are completely insignificant.
> The big question is where the breakdown happens; we know far too
> little to be able to say for certain. A guess is that it corresponds
> to a fraction of the total cell number, and given what we know about
> dementia, I would guess that you could probably have around 0.1-1%
> neural change and still be safely yourself (with a possible
> performance decrement).

Hmmm. Given the state of biostasis technology today, reanimation would
then have a very low margin of error. Pre-freeze cellular deterioration
alone may account for even more neuron damage than that, don't you
think? In the absence of a pattern to work from during reconstruction,
I don't give today's cryonics customers much of a chance of being
themselves.

Your point above does make good sense in that evolution is likely to
have installed some built-in redundancy in the network to avoid total
incapacitation after brain trauma. What are your thoughts on the
reasons for the stability of neural networks? Is there actually a
reorganization to "carry the load," so to speak, as you mentioned
above? Or can mere redundancy explain the bulk of it?
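
For what it's worth, a toy Hopfield-style network reproduces the
plateau-then-collapse curve you sketched. Everything here is an
assumption on my part (the network size, the number of stored patterns,
synchronous sign updates); the redundancy comes purely from each
pattern being smeared across all the weights:

import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 5                  # neurons and stored patterns (assumed)
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian weight matrix: every pattern is stored redundantly in all weights
W = (patterns.T @ patterns).astype(float) / N
np.fill_diagonal(W, 0)

def recall(state, steps=20):
    # Iterate synchronous sign updates toward a fixed point
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1
        if np.array_equal(new, state):
            break
        state = new
    return state

target = patterns[0]
for frac in (0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6):
    noisy = target.copy()
    flips = rng.choice(N, size=int(frac * N), replace=False)
    noisy[flips] *= -1         # disrupt a fraction of the neurons
    overlap = np.mean(recall(noisy) == target)
    print(f"{frac:4.0%} disrupted -> {overlap:6.1%} of pattern recovered")

Small disruptions are absorbed completely -- the flat part of your
curve -- and past some fraction the state falls into the wrong
attractor, which looks like the cliff. In this toy, at least,
distributed redundancy alone produces the stability, with no active
reorganization needed.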

Interesting stuff,

Hagbard