Re: Is cryopreservation a solution?

Anders Sandberg
11 Sep 1997 10:59:27 +0200

Hagbard Celine <> writes:

> I tend to lean towards a holistic neuron arrangement explanation. If
> the arrangement of neurons in your brain defines your identity, it is
> only because each and every neuron is in a particular place, which, as a
> consequence, births something more than the sum of the physical
> components (a particular identity). That is, a squishy mess of neurons
> put together based upon an incomplete picture of the *entire*
> arrangement would not yield anything more than a squishy mess of
> neurons.

Holism can work the other way too: an almost correctly put-together
brain would self-organize to a state very similar to the original
person, as memories, personality, chemical gradients etc. relaxed
towards a consistent state.

> Reductionist study of the brain may be helpful to understand
> biology, but I would hazard that it is insufficient to understand
> something so complex (and unfortunately, abstract) as identity.

It is funny to notice that the idea of holism is in some sense
deeply conservative. It claims that we can (and must) understand
the whole system without looking at the parts, and that attempts
to do so anyway will fail. The odd thing is that top-down explanations
(which are holistic) have so far not been very successful in helping
us understand the mind, the body and the universe despite centuries
of thinking, while the bottom-up explanations of reductionism have
worked quite nicely, even if they are limited.

Personally I see no problem with having complex and abstract
properties emerge from low-level systems; they do it all the
time. As I think Carl Sagan said: "there is no such thing as
*mere* matter!" The simplest systems demonstrate complex
behaviors based on simple internal interactions; we can choose
on what level we want to study them, but it is often easier to
deal with the lower levels first and then deduce how they
interact to produce higher levels.
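A tiny illustration of that point (my sketch, not from the original
post): Wolfram's rule 30 elementary cellular automaton. Each cell's
next state is a fixed lookup on its three-cell neighborhood, yet a
single live cell grows into an intricate, seemingly chaotic pattern.

```python
# Rule 30 elementary cellular automaton: a one-line local update
# rule that produces complex, aperiodic global behavior.
RULE = 30
WIDTH, STEPS = 63, 16

cells = [0] * WIDTH
cells[WIDTH // 2] = 1  # start from a single live cell

for _ in range(STEPS):
    print(''.join('#' if c else '.' for c in cells))
    left = [0] + cells[:-1]
    right = cells[1:] + [0]
    # the neighborhood (l, c, r) indexes a bit of the rule number
    cells = [(RULE >> (4 * l + 2 * c + r)) & 1
             for l, c, r in zip(left, cells, right)]
```

Run it and the triangle of `#`s that unfolds never settles into an
obvious repeating pattern, even though every cell only ever "knows"
about its two nearest neighbors.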

> This is not to say that our understanding of the brain will never reach
> the point where we can "fudge" things a little. But, one neuron
> incorrectly arranged may have little effect on identity -- say one step
> on the identity continuum. Two neurons incorrectly arranged -- two
> steps. Would three neurons incorrectly arranged have only a three-step
> effect? Or would it be four steps? Or six? What about four neurons? An
> exponential, geometric or one-to-one effect on identity?

That depends on what local metric you use in identity-space
(see Sasha's excellent paper about it, on his website).
My guess is that the effect will depend a lot on which neurons are
erroneous; some are more important than others. My experience
from neural networks suggests that they tend to be fairly stable as
you disrupt them up to a certain point, and then they quickly break
down. So the identity left after a certain number of fudges
would look like this:

--------- 100% identity
         \
          \------------ Zero identity

---Neural Change-------->

This suggests that small errors are practically insignificant.
The big question is where the breakdown happens; we know far too
little to be able to say for certain. A guess is that it corresponds
to a fraction of the total cell number, and given what we know about
dementia, I would guess that you could probably sustain around 0.1-1%
neural change and still be safely yourself (with a possible
performance decrement).
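That "stable up to a threshold, then sudden collapse" shape is easy to
reproduce in the simplest attractor network. Below is a toy sketch of
my own (not the networks discussed above): a pure-Python Hopfield net
storing one pattern, recalled after flipping a growing fraction of
neurons. Recall is perfect until the disruption crosses a threshold,
then it fails completely.

```python
import random

random.seed(1)

N = 100
pattern = [random.choice([-1, 1]) for _ in range(N)]

# Hebbian outer-product weights for a single stored pattern
W = [[pattern[i] * pattern[j] if i != j else 0 for j in range(N)]
     for i in range(N)]

def recall(state, sweeps=5):
    """Asynchronously update each neuron toward the nearest attractor."""
    s = list(state)
    for _ in range(sweeps):
        for i in range(N):
            h = sum(W[i][j] * s[j] for j in range(N))
            s[i] = 1 if h >= 0 else -1
    return s

def overlap(a, b):
    """+1.0 means identical, -1.0 means the exact inverse."""
    return sum(x * y for x, y in zip(a, b)) / N

for frac in (0.0, 0.10, 0.30, 0.45, 0.60):
    noisy = list(pattern)
    for i in random.sample(range(N), int(frac * N)):
        noisy[i] = -noisy[i]
    print(f"{frac:.2f} flipped -> overlap {overlap(recall(noisy), pattern):+.2f}")
```

With a single stored pattern the network recovers perfectly from any
corruption under 50%, and flips to the inverse attractor above it: a
step function, just like the sketch. Real brains are vastly messier,
of course, so the threshold (and its sharpness) would differ; this
only shows that threshold behavior falls out of very simple dynamics.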

Anders Sandberg                                      Towards Ascension!                  
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y