Damien Broderick writes:
> At 06:08 PM 11/29/01 -0800, Hal wrote:
> >Would agents evolve to act as if their "successors" (after a backup and
> >restore cycle) were as important to them as future selves which had not
> >gone through such a change?
> Of *course*, since their own experience would feel as if it apodictically
> substantiated this viewpoint--they would recall the scan, then waking up
> and learning of the regrettable loss of an earlier chrysalis. So, noting
> that this felt just fine, they'd be more inclined to project an attitude of
> hope toward the next iteration, and so on, ever more strongly.
I was thinking more in terms of survival, replication and evolution
rather than how it would feel to the agents. We should be able to make
predictions about their actions based on what behaviors lead to maximal
survival and replication.

This is presumably where our own sense of identity and desire for survival
came from. There is no a priori reason that brains have to try to stay
alive or to view their future selves as worth preserving. Animals which
held these views were more successful. This led to our instincts about
preservation of identity, appropriate for a world where mental replication
is not possible. We should be able to predict what new instincts about
identity will arise in a world where such replication can happen.
If backups and restores are possible, then in fact I think the
most favored behavior would be to make as many copies as possible.
Agents which adopted this behavior would tend to increase in numbers.
Of course it would depend on the costs involved but generally this would
be an attractive form of replication.
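The claim that copy-making behavior would spread can be sketched as a toy
replicator model (my illustration, not part of the original argument; the
population sizes, copy cost, and death rate are arbitrary assumptions):

```python
# Toy replicator dynamics: agents that treat backup copies as continuations
# of themselves, and so make copies freely, outgrow agents that refuse.
# All parameters here are illustrative assumptions.

def simulate(generations=20, copy_cost=0.4, death_rate=0.1):
    # starting populations of the two behavioral types
    pop = {"copiers": 10.0, "non-copiers": 10.0}
    for _ in range(generations):
        # both types face the same per-step chance of destruction
        for kind in pop:
            pop[kind] *= (1.0 - death_rate)
        # copiers convert surplus resources into extra instances;
        # with copy_cost=0.4, each surviving copier adds 0.6 copies per step
        pop["copiers"] *= (1.0 + (1.0 - copy_cost))
    return pop

final = simulate()
```

Under any positive net copy rate the copiers grow geometrically while the
non-copiers only decay, so after a few generations the population is almost
entirely agents that act as if their copies count as themselves.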
Furthermore once this was established we might expect agents to be
willing to sacrifice themselves if two or more copies were to benefit
sufficiently, like ants. We might interpret this as saying that
agents thought of their identity as encompassing the full set of their
copies.

If, however, we impose the limitation that this technology cannot be used
for replication but only for duplication (as in destructive teleportation)
then I imagine that agents would evolve to act as if their successors
were as valuable as their future selves. Adopting such behaviors would
increase the options available to an agent and improve its survival
prospects. Hence in such a world identity would come to include the
successor as well.

> This *doesn't* mean their fallible sense of conviction is *valid*, any more
> than a surviving soldier's faith in God supports the existence of this
> imaginary contrivance.
There is no valid or invalid; it is all semantics and behavior. In asking
whether a duplicate is me, or has the same identity as me, we really
mean whether we should act as if the duplicate is me. And the test for
such questions is, ultimately, survival. We have our own beliefs and
mental structures based on what allowed our ancestors to survive. In a
new world with new survival properties, new truths will become necessary.
One could adopt a position today that one has a new identity every
instant, with no continuity from moment to moment. Such a view would
lead to actions which were random, and no doubt to death in short order.
In that way we can say that this view is wrong. There is a meaningful
sense in which we have an identity which persists over our lifetimes.
This is the empirical content of the question of identity. It is not
an empty philosophical question. Survival gives empirical meaning to
questions of identity.

This archive was generated by hypermail 2b30 : Sat May 11 2002 - 17:44:23 MDT