Re: Cryonics and uploading as leaps of faith?

From: Brett Paatsch (paatschb@optusnet.com.au)
Date: Sun Jun 29 2003 - 23:46:41 MDT

    Lee Corbin writes:

    > Brett writes
    >
    > > I'm thinking functional equality is *possibly* not the
    > > same as identity. The contrary view appears to me to
    > > be that it *necessarily* is.
    >
    > You are correct. Functional equality is not *necessarily*
    > the same as survival-identity.

    [Giulio]
    > > > I feel like me because I remember the things that
    > > > I remember (including what I don't consciously
    > > > remember at this moment) or, in other words, because
    > > > of the specific information coded in my brain. This is I
    > > > believe the simplest explanation.
    > >
    > > I think this is just a restatement of the "there is no possible
    > > difference" case. I'll grant the duplicated you would feel
    > > like they were you. I won't grant that they *are* you just
    > > because they feel like they are.

    [Lee]
    > Just to clarify my position, I do agree with Giulio here.
    .
    > > I will grant that to me and everyone relating to you the
    > > duplicate will be satisfactory. I am more "selfish" when
    > > it comes to me.
    >
    > Absolutely! This is the *crux* of the real philosophical
    > problem! We ought to posit the existence of a Most Selfish
    > Individual, and ask to what well-informed decisions he
    > would come.

    There may be merit in that. I may actually know this guy :-)
    Let me take a shot at describing him.

    The Most Selfish Individual's well-informed decision can only
    be as good as his knowledge base. He is presumably rational
    and open to learning better survival options, but knows he is
    not yet fully informed. He is "jealous" of his mental sovereignty
    and will not accept as proven that which he doesn't understand.
    He will make judgements on imperfect information if he has to,
    and he'll probably go for the cryonics procedure or upload as
    a last resort (if he doesn't delay too long). But short of that,
    he'll probably try to replace all the organs in his body except
    his brain and CNS, and try to learn as much as possible about
    the brain with the extra time purchased by those organ
    replacements that are not critical to self. In this way he hopes
    that any uploading done on him will be done on the basis of as
    much information and understanding of the notions of
    'consciousness' and 'self' and the modularity of living brains
    as possible. Any 'leaps of faith' this Most Selfish Individual
    takes will be as *small* and as late as he can make them.

    > > I don't care whether you and my duplicate and everyone else
    > > in the world agree that .. the transformation that produces my
    > > duplicate has produce[d] me; I *care* that *I* am not *sure*
    > > that it is so beforehand.
    >
    .
    > But we are talking about a survival-oriented, selfish view of
    > what is "in it" for an individual.

    *I* perceive the world from a standpoint that is necessarily
    self-centred. *My* senses collect data and *my* brain interprets
    it as information. My "self", a process which I understand emerges
    along with consciousness, has amongst its attributes a desire for
    "self" preservation. It's a pretty strong desire, and my worldview,
    rightly or wrongly, informs *my* self about what seem to be more
    and less optimal ways to pursue that desire. Personally, I don't
    find altruism a particularly useful concept, except as a shorthand,
    and I'm not sure it has anything to do with *my* desire for self
    preservation one way or the other.

    > > > Yes; in every way your uploaded copy---or even you if you
    > > > are disintegrated at 10am tomorrow morning and then instantly
    > > > re-integrated using the same or different atoms---will have
    > > > this same impression. It could, even now, be happening a
    > > > hundred times a second.
    > >
    > > When you say "could" what are you basing your view on?
    >
    > Suppose that there existed a machine that could decompose
    > you into your constituent elements within a microsecond.
    .
    > Then suppose that a few microseconds later, it reconstitutes
    > you. Now suppose that this entire process happens hundreds
    > of times per second. There we are.
    >
    > And this is NOT the place to inquire after the actual physical
    > plausibility of such a device. Yes, our notions of who and
    > what we are do depend on our basic understandings of
    > physics, but whether a certain technical breakthrough
    > that is not conceptually significant ever occurs is moot.

    Ok. Stipulations noted. If I were to discover that this had in fact
    been happening, my worldview would necessarily change. I might
    conclude that continuity cannot be a necessary precondition for
    self-hood as I have come to experience it, because I would never
    have had continuity; I'd have been mistaken. But I think I've
    missed your point, as I seem to be repeating something I said
    earlier.

    > > ... the question is, once an adult level of sentience is achieved,
    > > can you capture the recursion counters in the wetware? A
    > > snapshot of the conscious process and memories and restore
    > > it either onto an identical wetware substrate or a different set
    > > of firmware on an upload. I don't know.
    >
    > Why not?

    Are you asking why I don't know?

    Because I don't *know* enough about how my consciousness
    and the experience of self-hood manifest to assume that they can
    persist completely decoupled from a matter substrate for any
    length of time. My current thinking is that no substrate means no
    conscious processing (or unconscious processing either), and no
    consciousness process means no self-concept process. In short,
    I assume that no brain means a discontinuation of me, because it
    seems prudent to do so.

    > Are you suggesting that it will *never* be possible to
    > root out every last property of the human brain?

    No, I'm not suggesting that, at least not for the human brain in
    a generic sense, though it might be so. It is likely, however, that
    one cannot contain a fully detailed working model of one's own
    brain in one's consciousness.

    > And what about the easy way: a copy is made by a
    > nanotechnological device that simply gets all the atoms?

    What about it?

    Is this the scenario where life and consciousness spring
    back into the new structure at the instant of its reassembly
    because they were an inherent part of it? In my experience
    brains grow; they aren't assembled top-down or bottom-up.
    How do you bring about that instant in wetware when all
    the pieces come together at once? It would require an
    extraordinary feat not just of construction but of
    coordination: the synchronous assembly of perishable
    components in real time, all brought together at the one
    instant that is the deadline. It doesn't seem "easy".

    > As for uploading, we will have to
    > wait to see what level of sophistication is truly required.

    Or further inquire and explore; perhaps an understanding of
    consciousness and the self, and of the modularity of brain
    processes, will reduce the requirement for, or the magnitude
    of, any leaps of faith.

    > It may be that stripping off one neuron at a time, noting its
    > connections, and making sure that the machine implements an
    > equivalent architecture is sufficient.
    >
    > In *this* particular case, I do believe that functional
    > equivalence is adequate to guarantee that the consciousness,
    > intelligence, etc., is the same---i.e., it feels that it's you just
    > the way you do, it is a difference without a difference, etc.

    To it, it's a difference without a difference. And you are
    unlikely to be complaining ;-).

    - Brett Paatsch


