From: Robin Hanson (rhanson@gmu.edu)
Date: Fri Jun 20 2003 - 21:06:10 MDT
At 09:46 PM 6/20/2003 -0400, Eliezer S. Yudkowsky wrote:
>The problem is that the above is about ideal Bayesians who have not
>evolved to argue about morality, and it does not even begin to fit or
>explain human intuitions about morality, which are much more
>complicated. In the above scenario no one would ever argue about
>morality! It may even be questionable whether the "values" of the ideal
>Bayesians referred to above are like what humans think of as "values", any
>more than activation of a simple numerical reinforcement center is what a
>human thinks of as "fun".
I was following you, and agreeing with you, until you got to this
paragraph. It seems to me that ideal Bayesians can in fact argue
productively about morality, just as they can argue productively about
anything else. Of course they will not persistently disagree, and during
their argument neither could predict the other's next opinion. But as long as
what is morally right and what each agent wants can combine differently
across possible states, morals are just like any other matter of fact to
argue about. (Of course humans
are not ideal Bayesians.)
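The "will not persistently disagree" point is the agreement result for
common-prior Bayesians, and "could not predict the other's next opinion" is
the martingale property of their announcements. A minimal sketch of such a
dialogue in Python, assuming a uniform common prior over nine states; the two
information partitions, the event discussed, and the true state are arbitrary
illustrative choices, not anything from this exchange. Each agent in turn
announces the probability of the event conditional on its private information
plus everything revealed by earlier announcements, and the announcements are
forced into agreement within a few rounds.

from fractions import Fraction

def prob(event, given):
    """P(event | given) under a uniform prior on a finite state space."""
    return Fraction(len(event & given), len(given))

def cell(partition, state):
    """The cell of `partition` containing `state`."""
    return next(c for c in partition if state in c)

def dialogue(states, part1, part2, event, true_state, max_rounds=50):
    """Two common-prior Bayesians take turns announcing P(event | private
    info + everything revealed by earlier announcements).  Each announcement
    publicly rules out states inconsistent with it, so the stated opinions
    cannot keep disagreeing forever on a finite state space."""
    event = set(event)
    common = set(states)          # states still consistent with all announcements
    partitions = [[set(c) for c in part1], [set(c) for c in part2]]
    history = []
    for _ in range(max_rounds):
        for name, part in zip(("agent 1", "agent 2"), partitions):
            info = cell(part, true_state) & common
            q = prob(event, info)     # this agent's announced opinion
            history.append((name, q))
            # Everyone learns that the true state lies in a cell of this
            # agent's partition whose conditional probability of `event` is q.
            common = {s for s in common
                      if prob(event, cell(part, s) & common) == q}
        if history[-1][1] == history[-2][1]:   # opinions have converged
            break
    return history

if __name__ == "__main__":
    states = range(1, 10)                      # states 1..9, uniform common prior
    part1 = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]  # agent 1's private information
    part2 = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}]  # agent 2's private information
    event = {3, 4}                             # the proposition they argue about
    for name, opinion in dialogue(states, part1, part2, event, true_state=1):
        print(name, "announces", opinion)

Run as written, this prints agent 1 announcing 1/3, agent 2 announcing 1/2,
then both settling on 1/3 in the next round; the particular numbers matter
less than the fact that the back-and-forth must terminate in agreement.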
Robin Hanson rhanson@gmu.edu http://hanson.gmu.edu
Assistant Professor of Economics, George Mason University
MSN 1D3, Carow Hall, Fairfax VA 22030-4444
703-993-2326 FAX: 703-993-2323