From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Jun 20 2003 - 21:52:02 MDT
Robin Hanson wrote:
> At 09:46 PM 6/20/2003 -0400, Eliezer S. Yudkowsky wrote:
>
>> The problem is that the above is about ideal Bayesians who have not
>> evolved to argue about morality, and it does not even begin to fit or
>> explain human intuitions about morality, which are much more
>> complicated. In the above scenario no one would ever argue about
>> morality! It may even be questionable whether the "values" of the
>> ideal Bayesians referred to above are like what humans think of as
>> "values", any more than activation of a simple numerical reinforcement
>> center is what a human thinks of as "fun".
>
> I was following you, and agreeing with you, until you got to this
> paragraph. It seems to me that ideal Bayesians can in fact argue
> productively about morality, just as they can argue productively about
> anything else. Of course they will not persistently disagree, and
> during their argument they each could not predict the other person's
> next opinion. But as long as in each possible state there can be a
> different combination of what is morally right and what each agent
> wants, then morals are just like any other matter of fact to argue
> about. (Of course humans are not ideal Bayesians.)
I can imagine an ideal *humane* Bayesian arguing productively about
morality, or exchanging information about it. (The way ideal Bayesians
converge isn't really much like what we think of as "argument"...) But if
you have two AIXIs, there's no complex unfinished moral computation for
them to argue about. AIXI can't have "fun" either. Ideal Bayesian
wannabes can argue productively about morality, but only if their utility
functions have enough internal structure to be interesting.
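For concreteness, a sketch of AIXI's standard definition (Hutter's formulation, reconstructed from memory rather than quoted from this exchange; m is the horizon, U a universal monotone Turing machine, and \ell(q) the length of program q):

  \dot{a}_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
      \bigl( r_k + \cdots + r_m \bigr)
      \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

The reward term is a flat scalar sum with no internal structure, so there is nothing morality-shaped inside it for two such agents to be mutually uncertain about; that is the sense in which a Bayesian wannabe needs a utility function with more internal structure before its arguments about morality become interesting.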
--
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence