RE: Duplicates are Selves

From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Sat Apr 05 2003 - 14:04:10 MST

  • Next message: Robert J. Bradbury: "RE: Duplicates are Selves"

    On Sat, 5 Apr 2003, Harvey Newstrom commenting on my comments on copies wrote:

    (Oh Harvey -- you raise a host of complex issues, which is, I suppose,
    why I hang out on this list.) It's going to be a toss-up today whether
    you or Anders gets the prize for creating the most worthwhile stuff to
    talk about (though he was delivering abstracts, so it's not clearly an
    original work product).

    First,
    > What happens to one copy may not happen to the other copy.

    Oh, but you can't assume this. If I can make 1000 copies of RJB,
    I think I might be quite tempted to produce a collective mind
    (it is essential to beat the hazard function -- repeat after me:
    "distributed replicated intelligence"). The logical step beyond
    that is a highly interconnected "distributed replicated intelligence".
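
    The "beat the hazard function" point can be made concrete with a toy
    calculation. This is only an illustrative sketch, not anything from the
    original post: it assumes each copy fails independently with a constant
    annual hazard rate (real existential risks would of course be correlated
    across copies, which is exactly why the post stresses *distributed*
    replication).

    ```python
    # Toy model (illustrative assumption, not from the original post):
    # each copy independently survives a given year with probability 1 - h.

    def survival_probability(h: float, years: int, copies: int) -> float:
        """Probability that at least one of `copies` independent copies
        survives `years` years at constant annual hazard rate h."""
        p_one_survives = (1.0 - h) ** years
        p_all_fail = (1.0 - p_one_survives) ** copies
        return 1.0 - p_all_fail

    # A 1% annual hazard over a century leaves a lone copy ~37% odds of
    # survival; with 1000 independent copies, losing all of them becomes
    # vanishingly unlikely.
    print(round(survival_probability(0.01, 100, 1), 3))
    print(survival_probability(0.01, 100, 1000) > 0.999999)
    ```

    The failure probability of the whole collective falls exponentially in
    the number of copies, which is the intuition behind "distributed
    replicated intelligence" beating the hazard function.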

    Once one can do this, all of the information and experiences become "shared".
    Now this becomes somewhat problematic, since if there are 1000 copies
    of me, someone somewhere is probably experiencing an orgasm with
    a fair amount of frequency, and that is going to make it rather
    difficult for the other minds in the collective to get some
    productive work done. So some filtering seems likely. Then one
    gets into whether the sender or the receiver is doing the filtering.
    Very complex problems.

    > As such, they each have a different point-of-view.

    Only limited by the degree to which they are restricting input.
    An "individual" can break itself off from the "collective" but
    as soon as it does that it is dooming itself to mortality.
    No way out of this box (at least that I've seen yet).

    > Making a duplicate copy does not change the point-of-view of
    > the original.

    Hard to say -- it depends on one's "beliefs" with respect to
    the original. If I'm reading/recalling the postings correctly,
    I think Harvey/Lee might assert this but Damien might not.
    This comes up in "The Saga of the Cuckoo" -- ones perspective
    on cost-free copying becomes a little jaded if one suspects
    the fate of the copies isn't very pretty.

    That is a key issue that Lee and I (and others) need to focus
    on if we get this teleportation paper written.

    > The original point-of-view still grow old and dies or is destroyed
    > in the destructive copy scenario.

    No. Provided one doesn't create copies faster than one's
    information capacity expands, there is no reason to lose any
    old/original "points-of-view" -- it gets into precisely the
    problem that many people face today with their computers:
    ("Which of these files do I no longer need and can
    delete?").

    (I'm assuming the "destructive" copy isn't really "destructive" --
    after all, if you can reassemble a copy of the original someplace
    else, you can reassemble the original where it was disassembled,
    and if you can do this disassembly/reassembly, then the
    concepts of growing old and death are obsolete.)

    > That point-of-view never is modified to achieve immortality. It is only the
    > newly created point-of-view that experiences immortality.

    This seems to assume one never forms a collective intelligence --
    one that does not need to be copied/teleported to achieve "immortality".

    > It may "remember" being mortal in that we have programmed it with a
    > copy of all the memories from the original, but it never actually
    > experienced them.

    Oh, now I think you are treading on very thin ice in attempting to
    differentiate "memories" from "experiences". If the molecular
    reassembly is "identical", how can these be differentiated (other
    than in the mind, if one happens to know one is a replicant)?

    An experience *is* a memory; I don't see one escaping from that easily.

    > One is mortal and is never saved.

    If one does not form a distributed replicated intelligence one is
    mortal. End of discussion.

    > One is immortal and was never in danger.

    Yes, but there are *different* forms of distributed replicated
    intelligence(s) that one may adopt -- this has significant effects
    on how one thinks, acts, accumulates knowledge and experiences, etc.

    The dominant choices may significantly affect the direction of the
    evolution of our civilization. They are not trivial choices.

    Robert



    This archive was generated by hypermail 2.1.5 : Sat Apr 05 2003 - 14:15:53 MST