Re: Engaging Bioethics

From: Robin Hanson (rhanson@gmu.edu)
Date: Fri Mar 02 2001 - 15:20:08 MST


Eliezer Yudkowsky wrote:
> > To appear, Summer 2002, Social Philosophy & Policy 19(2).
> > Current version at http://hanson.gmu.edu/bioerror.pdf or .ps
>
>This is a really fascinating paper... Is the paper likely to stick
>around on your website, and is it available for linkery and/or citation?

Thanks! And yup.

>sound like the sort of thing a maturing AI would formulate to learn
>and quantify human ethics.

But I *am* a maturing AI. :-)

>Are there any moderately technical works you would care to recommend
>on the subject?

As you might guess, I'm not as familiar with these things as a professional
philosopher would be. If by "technical" you mean math & formalism, I
doubt there are any. If you mean the professionals going into more detail,
I cited the more detailed works I know of. Some are on JSTOR I think.

>Such moral intuitions are commonly considered to be
>especially likely to be in error, all else equal, through some complex
>process of self-deception."
>... A slave-owner who believes that slaves
>cannot be trusted with freedom is not deceiving himself; he is rather
>being deceived by evolution - he is making cognitive errors which have
>adaptive value (for him).

I don't really care which words we use to describe these concepts; I
mainly care that we can communicate and that we make useful distinctions.
The phrase "self-deception" now has several connotations, and I think my
usage is within the usual range, but I agree it would be nice to make
more distinctions.

The slave owner is deceiving himself if in some sense "he should know
better." That is, if he has the cognitive tools to uncover the gene
deception but seems especially reluctant to use them. I want to learn
a lot more about self-deception - especially its warning signs.

>This presumes that the female of the species is seeking health, wealth,
>and intelligence as declarative goals, rather than responding to cues
>which were adaptive cues in the ancestral environment;

Yes - that is my point - we can't substitute other signals because the
choice of the existing ones is hard-coded in some ways.

>Health care is not just a signal for loyalty because of its sparse
>temporal distribution, but because of its context-insensitivity. ...
>sends a signal to nearby observers that the carer
>is someone who can be relied on to remain allied even under extreme
>circumstances.

Yes, I had this in mind when I said "hard times", but I could be clearer.

>unconditional ally is substantially more valuable than a conditional ally;

Well, it's a bit more complicated - we want it to be conditional on some
things and unconditional on others. We expect and want feelings to be
conditional on betrayal, for example.

>"If we think of status as having many good allies, then you want them to
>act as if they were sure to be of high status."
>
>Why?

That follows from the few sentences before it. You might prefer my
longer presentation, with math, in http://hanson.gmu.edu/showcare.pdf

>But the main thing I'm objecting to is that you went *way* too fast
>in that paragraph and totally lost me.

Gotcha. Will try to expand.

>Health makes someone a more valuable ally. Happiness may or may not.

As stated, this isn't clear to me. "Happiness" is presumably just a proxy
for other, more fundamental things one can invest in. It is not clear that
health is more useful to allies than those other typical things.

>It looks to me like it's just standard paternalism, ...
>I see no domain specificity for health care;

But empirically we see so much more paternalism in health areas than in
other policy areas.

>... as far as I can tell, advocacy of NHI is generated by the same set
>of causes that generate advocacy of Welfare. ...
>To be specific, both Welfare and NHI derive
>argumentary force from our intuitions about how to treat tribal members
>who are the victims of unpredictable, major, temporally sparse
>catastrophes.

The urge to nationalize is much stronger in health care than in most
industries. Your alternative explanation is not that far from mine.

Thanks for taking me seriously enough to read and react!

Robin Hanson rhanson@gmu.edu http://hanson.gmu.edu
Asst. Prof. Economics, George Mason University
MSN 1D3, Carow Hall, Fairfax VA 22030-4444
703-993-2326 FAX: 703-993-2323



This archive was generated by hypermail 2b30 : Mon May 28 2001 - 09:59:39 MDT