Re: Whose business is it, anyway?

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Jan 09 2003 - 10:56:23 MST


Lee Corbin wrote:
>
> Oh, a great deal has been accomplished! The whole emphasis
> on when we should intervene has been changed, and, IMO will
> be found to be much more in accordance with practical and
> time-tested laws.

Um... the morality we're trying to maximize seems to me to occupy a
logically prior level to determining which laws are "time-tested", insofar
as morality would provide the fitness metric against which I determine
which particular laws are good or bad.

> Firstly, however, I note that your paragraph
> mentions nothing about the law---it's as if we are in some
> lawless space colonies, and we are only interested in examining
> what we *approve* of. But intervention is often an entirely
> different matter!

The law is an abstraction built up from the approvals and disapprovals of
individuals, including approval and disapproval of the particular class of
actions known as interventions.

>>Even if "Is it my business?" and "Is it BAD?" are different
>>intuitions, what is it that makes one more interesting than
>>the other?
>
> Simply put, it's all about intervention, as you say.
> Specifically, intervening in cases where it is none
> of the law's business. For example, were they brought
> back to life suddenly, Thomas Jefferson and John Adams
> would look at you incredulously were you to suggest
> that the government regulate whether you could burn
> a flag or a cross on your own premises.

Um... nice recursion there, but you still haven't answered why your own
"intervening in this case is none of the law's business" is in any wise
more interesting than a libertarian's "intervening in this case is BAD",
nor indeed why they are not equivalent viewpoints.

>>For me the archetypal example of a communicable moral case for third-party
>>intervention is where party A is attempting to kill nonconsenting party B.
>>In this case, I will, if I can, intervene to prevent A from killing B.
>
> Next time I'm in south central L.A., I will recall your
> advice, but I may more sensibly conclude that in many
> cases it's none of my business whether one gang-banger
> attacks another. Now an exception to this might arise
> if we pass laws making it illegal (shudder) for a
> bystander to fail to become involved. But do you have
> any examples that sound as though they might trouble me
> concerning my "we mind our own business" attitude? I
> already discussed Damien's fine example.

I didn't provide you with any kind of advice. I simply stated a personal
position of mine, that I will intervene if I can.

Incidentally, "if I can", in this case, excludes attacking a militarily
superior force; but I would readily call the police, thus applying force
by proxy. Which I would certainly never do if I considered the matter
"none of my business", whether it was illegal or not.

>>My own argument? I freely admit that a woman mutilating her baby is a
>>more complex case than a woman mutilating an actively protesting adult,
>>since it involves an attempt to extrapolate forward what the baby would
>>want,
>
> I suspect that acting "just for someone's own good,
> whether they realize it or not", gets problematical beyond belief.

Yes, it gets very problematical very fast. It is still a problem that I
cannot avoid confronting where babies are involved. Perhaps you confuse
"acting for the good of an incomplete mind which lacks the cognitive
capability to realize X" with "claiming to act 'for the good of' a fully
adult entity who is actively objecting to it".

> No. How the laws are written, and hence who the police
> will side with, cannot be settled by moral argument.
> The far less idealistic approach, which has been shown
> to be vastly more workable, has been to observe in what
> ways successful societies seem to maximize benefit and
> prosperity by having enormous regard for individual
> citizen legal rights, and enormous regard for private
> property. To be sure, as Hayek explains, we need to
> be open to new experiments and ideas, however.

Maximize benefit and prosperity for what class of entities? If you aren't
counting babies and simulations in the tally, then your assessment of what
"works" is based on a quite different metric for workingness. I'm sure
that if you don't count slaves as people, then slaveowning societies can
be shown to maximize benefit and prosperity (for slaveowners) by having
enormous regard for "private property".

>>The same holds for a sentient in a simulation running on a computer you
>>suppose yourself to "own". You've gone on record as saying that it is
>>"none of your business" what someone does with "their" simulation.
>
> Yes.
>
>>I am just as much against ownership of a simulation as I would be
>>against the claim that you "owned" the proteins making up a sentient
>>you claimed was your "slave".
>
> Right. Our old argument. And one can see how it comes down
> to a difference in how societies of equals should function.

No, it comes down to a difference in who we consider an "equal". Lee, I
see absolutely no difference between claiming to own someone's proteins
and claiming to own their hardware. People are patterns in physics,
period. I don't care what level of abstraction you think they're on; to
me, they're just people.

> Do you acknowledge the harm of preventing people from running
> extremely numerous simulations, and thus granting run time to
> perhaps trillions of happy people, because of restrictive laws
> and interventions by outsiders? (Of course *I* admit that in
> a vanishingly few of these simulations, unspeakable horrors
> occur---but that's true in the hell branches of MWI already.)

An interesting question. I doubt I shall ever be confronted with it in
practice, as I see no moral purpose that would *ever* call for
"simulations" rather than "citizens" - the only thing you can do with a
"simulation", as you advocate that legal status, which cannot be done with
a "citizen", as I advocate that legal status, is violate the simulation's
volition. Why should there be trillions of simulations rather than
trillions of citizens?

But, given your strange hypothetical question, I guess my answer would be
that I don't really know given my current morality - it would depend on
whether I thought the good of those trillions outweighed the hell
branches. If the universe is infinite, I might think that having a clean
universe with aleph-null happy people distributed several trillion to a
causal volume is better than a universe with aleph-null happy people and
aleph-null unhappy people distributed several quadrillion to a causal
volume, even if the relative proportion of unhappy to happy people is low.
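To spell out the cardinal arithmetic behind that comparison (my own
notation; the particular densities are illustrative assumptions, not
anything from the hypothetical as Lee posed it):

```latex
% Totals alone yield no moral differential, since the two universes
% contain the same cardinal number of happy people:
%   \aleph_0 = \aleph_0 + \aleph_0 .
%
% The comparison must therefore be made on densities per causal
% volume, written here as (happy, unhappy) pairs:
%   \rho_{\text{clean}} \approx (10^{12},\; 0)
%   \rho_{\text{mixed}} \approx (10^{15},\; \epsilon \cdot 10^{15}),
%       \quad \epsilon \ll 1 .
%
% Any preference between the universes then rests on the relative
% frequency of unhappy people within a causal volume, not on the
% (identical) infinite totals.
```

Which is exactly why dismissing relative frequency, as the MWI argument
below requires, wipes out the only handle a morality can get on infinite
cases like this one.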

Would you suddenly turn around and consider intervention to be your
business if the majority of simulations were shaping up as hell worlds? I
didn't think that was your argument at all. I thought it was just none of
your business.

Incidentally, "but that awful thing already happens under MWI" is a fully
general and rather silly argument; fully general in that it can be used to
excuse absolutely anything; rather silly, in that it wipes out all moral
differentials if it succeeds, but fails the instant you consider the
relative frequency of an event as morally relevant.

-- 
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


This archive was generated by hypermail 2.1.5 : Wed Jan 15 2003 - 17:35:51 MST