From: Rafal Smigrodzki (rafal@smigrodzki.org)
Date: Thu Mar 06 2003 - 01:04:59 MST
----- Original Message -----
From: "Wei Dai" <weidai@weidai.com>
To: <extropians@extropy.org>
Sent: Wednesday, March 05, 2003 8:05 PM
Subject: Re: Spacetime/Inflation/Civilizations
> On Wed, Mar 05, 2003 at 12:39:05PM -0800, Hal Finney wrote:
> > One of the paradoxes that I have always struggled with is this: if you
> > run exactly the same conscious program twice, does it matter? Does it
> > increase the "measure" or "probability" of that conscious experience?
> > Do I do good by re-running someone's pleasant experience, and harm by
> > re-running a bad one?
>
> When I try to think about this question, it always comes back to: how do I
> know I do good by running (or helping to run) someone's pleasant
> experience the first time?
### Ask him. I tend to think that only a well-informed decision is a good
decision, and that the only entity rightfully capable of answering ethical
questions about an entity is that entity itself, provided it has an opinion
on the question; otherwise the ethical element of the question is moot
(which, as an aside, is why e.g. snails are not ethical subjects). If the
person consents to the experience, then it is good to run it. The same
applies to simulations and reruns: if the person doesn't want a rerun, you
shouldn't do it. This is a direct application of the autonomy principle.
Somehow I feel that infinite multiplicity doesn't affect this conclusion.
While it is nice to know that an infinite number of me's will make it into
the infinite future, all the decisions we make have local consequences, and
these matter independently of everything else. It can be argued that the
Bayesian egoist and the Platonic altruist (to use Eliezer's terminology)
are equivalent.
Rafal
This archive was generated by hypermail 2.1.5 : Thu Mar 06 2003 - 01:09:46 MST