From: Hal Finney (hal@finney.org)
Date: Wed Mar 05 2003 - 20:12:24 MST
Wei Dai writes:
> On Wed, Mar 05, 2003 at 12:39:05PM -0800, Hal Finney wrote:
> > Do I do good by re-running someone's pleasant experience, and harm by
> > re-running a bad one?
>
> When I try to think about this question, it always comes back to: how do I
> know I do good by running (or helping to run) someone's pleasant
> experience the first time? Sure, some of us have a moral intuition that
tells us it's good, but this intuition is not shared universally (some
> people don't think pleasure or even happiness is inherently good) and it
> does not help in the case of re-runs anyway. If we had a set of
> self-evident moral axioms which allow us to derive as a logical conclusion
> that running the original pleasant experience is good, we could use it to
> reason about the value of re-runs, but unfortunately we don't.
But it sounds like you would have to extend this sort of agnosticism
even to relatively unproblematic moral cases like saving someone's life,
relieving pain, or granting the last wish of a dying child. We don't
have certain knowledge that any of these things are good to do, but I
think there would be almost 100% agreement that these are in fact good.
It seems to me that taking such an extremely skeptical position has
some of the same problems as solipsism: it can't be refuted, but it's
not useful in practice.
I would rather start with the idea that it's good to do good, so to speak,
and then try to extend that to the question of whether it's good to re-run
good experiences. Cutting off the inquiry at the beginning by saying
that we don't know if anything is good isn't going to help with this part
of the problem, although it may be a valid perspective in its own way.
The paper that is the subject of this thread,
http://xxx.lanl.gov/pdf/physics/0302071, discusses some of the issues
relating to moral action in a multiverse. I don't agree with most of
their analysis; in particular, they assume the possibility that "you" are
exactly one instance in the multiverse and therefore that "your" actions
affect only "you", which neglects the inherent multiplicity of each
individual. I also think that they have moved too quickly to reasoning
based on having infinite numbers of copies who take each alternative
course of action, without recognizing that there are probabilities
involved and that your decisions can affect those probabilities.
So I would take a very different approach to this problem, but I still
think that the general project of considering ethical issues in the
context of the multiverse is valid and interesting. And it seems to
me that focusing on Tegmark's level 1 multiverse provides a particularly
simple and tractable framework for thinking about multiplicities.
Hal
This archive was generated by hypermail 2.1.5 : Wed Mar 05 2003 - 20:18:58 MST