From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Jun 20 2003 - 19:46:10 MDT
Peter C. McCluskey wrote:
>
> I can understand Bayesians holding different values, but I don't
> understand how that enables disagreement. What stops them from rephrasing
> their differences along these lines: the values that the Cherokee people
> want to maximise are best served by killing any white man who claims to
> own the land, and the values held by the group of white men who want the
> land are maximised by killing the Cherokee (I changed the value differences
> a bit to make it easier to describe them with less ambiguity and to make
> it clear that I'm assuming no compromise involving a monetary payment would
> eliminate the dispute).
> One possible answer I thought of was that nothing requires them to
> notice that they are using "belongs" in ways that are ambiguous and
> inconsistent. I'm having trouble analyzing this. If I assume that they
> believe values can't cause Bayesians to disagree, they should deduce from
> the apparent disagreement that something such as an ambiguous word is
> causing the disagreement, and they should correct it by speaking more
> precisely. But if instead I assume they believe values CAN cause Bayesians
> to disagree, it's unclear whether they have to notice the ambiguity.
The difficulty here stems from two different senses of the word
"disagreement". Ideal Bayesians who "disagree about values" still cannot
"disagree about facts". That is, having different values does not allow
ideal Bayesians to disagree about facts, including the fact of who assigns
what values. Perhaps this means that the term "disagreement" should not
be used for differing values, and we should simply say that Bayesians may
"assign different values".
This confusion stems from treating "value" as an abstract variable with
deixis; if we apply the term "value" to a specific Bayesian, we fill in
that Bayesian's specific values. Both X and Y agree that X's values say Q
and that Y's values say P. But if you abstract out the specific X and Y and
talk about the abstract variable "values", which takes on the specific
values of the speaker, then X and Y "disagree" on "values" in the sense
that they agree values(X) differs from values(Y). Think of X and Y as objects,
and values as an instance method. We say that X and Y "disagree" because
they give different outputs for the "values()" function call, but they
both agree on the output of "X.values()" and "Y.values()".
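To make the analogy concrete, here is a minimal Python sketch; the
Bayesian class, the "creed" attribute, and the placeholder strings "Q"
and "P" are just illustrative stand-ins, not part of the scenario above:

    class Bayesian:
        def __init__(self, creed):
            self.creed = creed

        def values(self):
            # Deictic: the output depends on which instance you ask.
            return self.creed

    x = Bayesian("Q")   # X's values say Q
    y = Bayesian("P")   # Y's values say P

    # "Disagreement" on the abstract call values(): the outputs differ.
    assert x.values() != y.values()

    # Agreement on every fully specified fact:
    assert x.values() == "Q"   # both agree what X.values() returns
    assert y.values() == "P"   # both agree what Y.values() returns

Both agents get the same answers to all three assertions; the only
"disagreement" lives in the un-filled deictic call.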
The problem is that the above is about ideal Bayesians who have not
evolved to argue about morality, and it does not even begin to fit or
explain human intuitions about morality, which are much more complicated.
In the above scenario no one would ever argue about morality! It may
even be questionable whether the "values" of the ideal Bayesians referred
to above are anything like what humans think of as "values", any more than
activation of a simple numerical reinforcement center is what a human
thinks of as "fun".
One oversimplified way in which you can have a genuine disagreement about
values is if there is a common computation Values(), and X and Y have
a disagreement of fact about what output Values() produces, with no deixis
or curried functions involved. But of course that is not how human values
work either. It's a bit complicated. See my post in this thread on 6/17.
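For contrast, a minimal sketch of that oversimplified case; the
shared_values function and the two estimates are again illustrative
stand-ins:

    def shared_values(situation):
        # One computation common to everyone: no 'self' parameter,
        # so there is no deixis and nothing to curry.
        ...  # some hard-to-evaluate computation

    # X and Y point at the *same* function, but hold different
    # factual estimates of what it outputs for a given situation:
    x_estimate = "output A"
    y_estimate = "output B"

    # That is an ordinary disagreement of fact, which ideal Bayesians
    # would resolve by evaluating (or better approximating)
    # shared_values(situation) and updating on the result.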
--
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence