Re: Ethics as Science

From: Robin Hanson (rhanson@gmu.edu)
Date: Fri Mar 03 2000 - 13:51:32 MST


Dan Fabulich wrote:
> > I think this is just wrong. The difference between your beliefs now and at
> > the "end" of inquiry is made up of a bunch of little differences between
> > nearby points in time. Anything that informs you about your beliefs at any
> > future time implicitly informs you about your beliefs at the "end". I
> > could
> > prove this in a Bayesian framework if you would find that informative.
>
>Go ahead. Right off the bat, I posit that you can't provide me with such
>a proof at all if I'm going to have an infinite number of logically
>independent thoughts about ethics (as at an Omega point, if any).
>Before you begin, however, take note of an interesting fact. ...
>The probability that I'll change my beliefs in light of what your
>computation tells me provides an upper limit on the certainty of your
>prediction; ... So its computation will have to take into account
>the result of its own computation before yielding its answer. ...
>But consider ANOTHER interesting fact. ... But if I AM going to
>change my beliefs, ..., then the computation
>can't tell me anything with any certainty whatsoever.

I don't understand what you mean by your Omega point assumption. But how
about we start with a simple example, and then see where it needs to be more
complicated, eh? Let's assume that in every possible state of the universe
you will eventually come to some "end" conclusion about ethics. Let's pick
a particular statement S about ethics, and say that you will in the end
either agree with it or not. More specifically, let's break the universe of
possible states into four disjoint subsets: A,B,C,D. In sets A and B you will
agree with S, and in sets C and D you will disagree with S. And at the
moment your prior over (A,B,C,D) is (.1,.2,.3,.4). So your probability
estimate for agreeing with S is 30%.

Now consider the factual question of whether humans' ancestral tribes were
more like bonobo or baboon tribes. You realize that this matters for human
ethics. Bonobo is the answer in sets A and C, while baboon is the answer in
sets B and D. Now if I just told you that the answer is bonobo, you
would update to a 25% chance of agreeing with S, while if I told you baboon
you would update to a 33% chance of agreeing with S.
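To spell out that arithmetic, here is a toy sketch in Python (the variable
names are just labels of my own for the sets above):

    # Prior over the four disjoint sets of states.
    prior = {"A": 0.1, "B": 0.2, "C": 0.3, "D": 0.4}

    # You agree with S in sets A and B; bonobo is the answer in A and C,
    # baboon in B and D.
    p_agree        = prior["A"] + prior["B"]                 # 0.30
    p_agree_bonobo = prior["A"] / (prior["A"] + prior["C"])  # 0.25
    p_agree_baboon = prior["B"] / (prior["B"] + prior["D"])  # 0.33...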

Hopefully this has all been standard so far. Now we do the thing you seem
to think is hard. There are these same sets A,B,C,D with the same prior and
the same relation to S, but we drop the bonobo/baboon interpretation of A,C
vs. B,D. Now A,C are the sets where I just tell you that you will have a
25% chance of agreeing with S, while B,D are the sets where I tell you it's
a 33% chance. Upon hearing this you update your beliefs to be consistent
with the claim I just made. Ta da, I have fully taken into account the
effect of my statement on you.
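The same point in code (again just a sketch): treat my announcement itself
as the evidence, and note that conditioning on hearing either announcement
reproduces exactly the number announced. That self-consistency is what it
means for the statement to have already accounted for its own effect:

    # A,C are the sets where I announce "25%"; B,D where I announce "33%".
    prior = {"A": 0.1, "B": 0.2, "C": 0.3, "D": 0.4}

    def p_agree_given(announced_sets):
        total = sum(prior[s] for s in announced_sets)
        agree = sum(prior[s] for s in announced_sets if s in ("A", "B"))
        return agree / total

    # Each posterior equals the value announced, so hearing the
    # announcement triggers no further revision.
    assert abs(p_agree_given(("A", "C")) - 0.25) < 1e-9
    assert abs(p_agree_given(("B", "D")) - 1/3) < 1e-9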

> > Let's consider a physics analogy. [...] You want the mass of the
> > particular atom you have in mind, and no you aren't going to tell me
> > which one that is.
>
>This is a totally faulty analogy. Totally unlike this example, the
>question of what ethical beliefs I'm going to have is entirely well-posed
>and completely verifiable after the fact. (So long as we take
>functionalism to be right, and I do.)

I don't understand your objection. It seemed well-posed and verifiable
to me. If you want to be clearer about verifiability, assume that the
identity of the atom was encoded in a light beam sent off to another
star, and the signal would return in ten years.

Robin Hanson rhanson@gmu.edu http://hanson.gmu.edu
Asst. Prof. Economics, George Mason University
MSN 1D3, Carow Hall, Fairfax VA 22030
703-993-2326 FAX: 703-993-2323


