Re: Rationality of Disagreement (Was: Status of Superrationality)

From: Hal Finney (hal@finney.org)
Date: Sat May 31 2003 - 16:45:55 MDT

    Jef Allbright wrote:
    > Robin Hanson wrote:
    > > The argument is *not* that eventually rational agents must come to
    > > agree if they share enough experience and evidence. It is that they
    > > must agree *immediately*, merely due to knowing each other's opinion,
    > > without knowing their supporting evidence.
    >
    > I would expect the two Bayesians to immediately accept that each of their
    > viewpoints are equally valid within each one's estimate of the range of
    > uncertainty, but it seems to me that for them to immediately and absolutely
    > agree on the issue would require certainty that they perfectly understand
    > each other's comprehension of the issue. For non-trivial issues I think
    > this perfect understanding is almost never the case.

    While Robin says that they would agree "immediately", his result
    describes a process in which the agents exchange their current beliefs,
    update on what they hear, and then exchange the updated beliefs.
    They come to agreement quite quickly, and the only information that
    has to be exchanged is each other's opinions. I think that is what
    he means by "immediately".

    I wrote up a long description of a simple example which shows how this
    might work, at:
    http://forum.javien.com/XMLmessage.php?id=id::CXYQFktI-RH4a-Wgki-YXwg-PRttPgVTJBtH.
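
    To make the process concrete, here is a rough sketch in Python of a
    toy model of my own (not the setup in Robin's paper): a yes/no question
    with a 50-50 prior, where each person privately sees one signal that
    points the right way with some assumed accuracy. In a model this
    simple, announcing your posterior reveals your signal, so a single
    exchange of opinions is enough to produce agreement:

        ACC = 0.7      # assumed accuracy of each private signal
        PRIOR = 0.5    # prior probability that the claim is true

        def posterior(signals):
            """Posterior that the claim is true, given a list of independent
            signals, each True (supporting) or False (opposing)."""
            p_true, p_false = PRIOR, 1.0 - PRIOR
            for s in signals:
                p_true *= ACC if s else 1.0 - ACC
                p_false *= (1.0 - ACC) if s else ACC
            return p_true / (p_true + p_false)

        amy_signal, bob_signal = True, False          # opposite private evidence

        print(posterior([amy_signal]))                # Amy announces 0.7
        print(posterior([bob_signal]))                # Bob announces 0.3

        # Each announcement pins down the other's signal, so both now
        # condition on both signals and arrive at the same answer:
        print(posterior([amy_signal, bob_signal]))    # 0.5
        print(posterior([bob_signal, amy_signal]))    # 0.5

    Real disputes are not this tidy, of course; the point is only that the
    stated opinions can do the work we usually expect the underlying
    evidence to do.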

    Another counter-intuitive aspect of this result is that the "path"
    to agreement is random. Intuitively, we might grant that disputants
    could eventually come to agree just from knowing each other's opinions:
    each would gradually come to accept that the other's stubbornness is
    a sign that he must have good grounds for his beliefs. On that picture,
    each side clings to its own position until finally one or the other
    capitulates.

    This is not at all what Robin's paper predicts! Instead, for a typical
    dispute, say one with just two possible positions, each participant has
    a 50-50 chance of changing his mind on every round. Here is a typical
    dialog among rational disagreers:

    AMY                                         BOB

    I don't think capital punishment            I believe capital punishment
    reduces crime.                              does reduce crime.

    Oh, really? Then I now think that           You don't say! In that case I
    capital punishment does reduce              now believe that capital punishment
    crime.                                      does not reduce crime.

    Well! Therefore I have now gone back        Oh? Then I now believe again that
    to my original position, that capital       capital punishment does reduce
    punishment does not reduce crime.           crime.

    Hmmm... Even so, I will stay with my        Wow. In that case I will now accept
    previous position that capital punishment   your original statement, that capital
    does not reduce crime.                      punishment does not reduce crime.

    Then we agree!                              Then we agree!

    This should be read as though the two parties are saying their lines
    simultaneously, with Amy and Bob each reacting to the statement the
    other made on the line above. Neither hears what the other says on
    the same line until after finishing his own statement.

    As you can see, each party changed sides several times, until on one
    round one of them held his position while the other switched, and at
    that point they agreed. Believe it or not, this is the predicted path
    to agreement between people who are mutually rational.
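
    To see how such an exchange tends to play out, here is an even cruder
    sketch (again my own toy, with literal coin flips standing in for the
    proper Bayesian updates): each round, each party switches sides with
    probability 1/2 after hearing the other's last statement, and the
    argument ends as soon as the two positions match:

        import random

        def debate(rng):
            """One simulated argument. True means "capital punishment
            reduces crime"; the parties start on opposite sides and each
            switches with probability 1/2 every round until they match."""
            amy, bob = False, True
            rounds = 0
            while amy != bob:
                amy_next = amy if rng.random() < 0.5 else not amy
                bob_next = bob if rng.random() < 0.5 else not bob
                amy, bob = amy_next, bob_next
                rounds += 1
            return rounds

        rng = random.Random(0)
        lengths = [debate(rng) for _ in range(10000)]
        print(sum(lengths) / len(lengths))   # about 2 rounds on average

    Under this toy model the chance of matching on any given round is one
    half, so disputes of this kind are settled after about two rounds on
    average.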

    Needless to say, this is a highly unusual dialog. I would venture to
    suggest that virtually no one in the history of the human race has
    ever engaged in an argument about a contested issue which followed
    this pattern.

    Now, I think one reason is that this is the discussion that results
    when people are constrained to describe only their opinions. And in
    that case there is a problem. While we can guarantee that the two
    will come to agreement, there is no guarantee that the agreement is
    correct. In fact, the agreement reached by this method may not be the
    agreement that would have been reached if each had access to all of
    the information held by the other. The latter agreement is more
    informed, and therefore more useful for survival.
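
    To see the difference concretely, here is a minimal example reusing
    the toy model from the first sketch (my own illustration, with
    made-up evidence counts): suppose Amy privately saw two supporting
    signals and Bob one opposing signal. If they trade only bottom-line
    positions, as in the dialog above, they end up jointly asserting one
    of the two poles; pooling the underlying evidence would instead have
    given them an intermediate, better-informed answer:

        ACC, PRIOR = 0.7, 0.5    # same assumed accuracy and prior as before

        def posterior(signals):
            p_true, p_false = PRIOR, 1.0 - PRIOR
            for s in signals:
                p_true *= ACC if s else 1.0 - ACC
                p_false *= (1.0 - ACC) if s else ACC
            return p_true / (p_true + p_false)

        print(posterior([True, True]))           # Amy's opinion: about 0.84
        print(posterior([False]))                # Bob's opinion: 0.30
        print(posterior([True, True, False]))    # pooled evidence: 0.70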

    So our conventional pattern, which is to exchange the reasons for our
    beliefs, is actually superior in terms of providing information useful
    for survival. Nevertheless, I suspect that upon an exchange like this,
    where each party says "I believe X for reason Y", it would be appropriate
    for the participants to adjust their beliefs along the lines above.
    They might continue to discuss and give their reasons, but their
    beliefs about which side is likely to be right should switch back and
    forth randomly. It might even be that once they reach agreement, they
    would want to continue to exchange information about the pros and cons
    of their position, and this might cause them to fall out of agreement
    at some point, which can't happen in the simple model above. But with
    this more complex process, more information is exchanged and people
    make better decisions.

    Hal


