Re: Changing one's mind

From: Anders Sandberg (asa@nada.kth.se)
Date: Mon Apr 07 2003 - 10:43:37 MDT


    On Mon, Apr 07, 2003 at 08:09:43AM -0700, Robert J. Bradbury wrote:
    >
    > Ok, in the face of a number of testimonials that people
    > have "changed" their minds, I am forced to concede
    > that this is indeed possible. The question *then*
    > becomes whether this is a character trait that
    > is more common on the ExI list (though there
    > are days when this really doesn't seem to be
    > the case) and less common in the general population.
    > (I.e., there is a selection effect for people more
    > willing to "change" their minds on the ExI list.)

    There is certainly a selection effect, but it is likely more
    complex than a simple preference for mind-changers. The XNTX
    dominance on the list seems to be one of the few statistically
    significant examples of such an effect.

    > This would seem to relate closely to ExI principle 7:
    > "Rational Thinking" -- if one cannot think "rationally"
    > then how can one choose to "change" one's mind?

    But on the other hand, transhumanists in general, and we on this
    list in particular, don't seem to be much more rational about our
    lives or ideas than any other group of well-educated intellectual
    people. We share ideas and views to a great extent, which makes us
    overlook the stupidity and irrationality sometimes hidden in our
    midst (I admit it: it happens all the time that I catch myself
    accepting an argument that actually isn't very strong because it
    agrees with everything else I think I know or value).

    > This then relates to the question of whether we are anywhere
    > near having the computer/AI technology to determine whether
    > people are making rational arguments. (I'd like to browse the
    > Javien Forum with those as priority messages.) If we aren't
    > close to having the computer understand such arguments, can
    > we get away with Bayesian filtering methods to do at least
    > some fraction of the job?

    The validity of many arguments hinges on so many other things that
    they probably cannot be classed as rational or not, even when
    supported by evidence. As an example, take the recent metastudies
    suggesting that the early Middle Ages were a warm period and that
    the climate is shifting back towards this state, possibly without
    any link to greenhouse emissions. These studies hinge on hundreds
    of other studies, which in turn are based on research of shifting
    quality and methods of varying applicability. Some may complain
    about the uncertainties of limnological data; others might worry
    about how much we can trust the individual researchers behind the
    metastudies and the studies they build on. So using these
    metastudies as an argument for something is equivalent to basing
    the argument on a huge pile of assumptions, or Bayesian priors.
    Which we do all the time, of course, chunking all these priors
    into worldviews and sloppy everyday reasoning ("Greenpeace always
    inflates their numbers" / "That study was funded by Evil
    Corporation" / "Professor Knödlerbach is usually right about this
    kind of thing").
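
    To make Robert's Bayesian filtering suggestion concrete, here is a
    minimal sketch in Python: a toy naive Bayes scorer over message
    tokens, in the spirit of the spam filters of the day. The
    "rational"/"not" labels and the toy training messages are purely
    illustrative assumptions on my part; producing those labels is of
    course the hard, human part of the job.

        import math
        from collections import Counter

        def train(messages):
            """Count token frequencies per class from (text, label) pairs."""
            counts = {"rational": Counter(), "not": Counter()}
            totals = {"rational": 0, "not": 0}
            for text, label in messages:
                for token in text.lower().split():
                    counts[label][token] += 1
                    totals[label] += 1
            return counts, totals

        def score(text, counts, totals):
            """Log-odds that a message is 'rational', with Laplace smoothing."""
            vocab = len(set(counts["rational"]) | set(counts["not"]))
            log_odds = 0.0
            for token in text.lower().split():
                p_r = (counts["rational"][token] + 1) / (totals["rational"] + vocab)
                p_n = (counts["not"][token] + 1) / (totals["not"] + vocab)
                log_odds += math.log(p_r / p_n)
            return log_odds  # > 0 leans 'rational', < 0 leans 'not'

        # Toy corpus: two hand-labelled messages, then score a new one.
        msgs = [("the evidence suggests a link because the data show it", "rational"),
                ("everyone knows they always lie about this obviously", "not")]
        counts, totals = train(msgs)
        print(score("the data suggest a link", counts, totals))

    Such a filter only picks up surface correlates of argument style;
    it says nothing about whether the pile of priors underneath an
    argument is sound.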

    I would love to build a graphical computer-aided system for
    constructing rational arguments, even if the arguments it handled
    lacked the depth and complexity we see in ordinary discussions. It
    would be great to use as a skeleton for building scientific
    theories, or for thinking about what to research right now.
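
    As a very rough sketch of what that skeleton might look like, here
    is a toy argument graph in Python. The Claim structure, the edge
    weights, and the naive way credences are combined are all
    illustrative assumptions, not a worked-out argumentation
    framework; a real system would have to be far more careful.

        from dataclasses import dataclass, field

        @dataclass
        class Claim:
            text: str
            prior: float  # initial credence in [0, 1]
            # (child, weight) pairs; attacks get negative weight
            supports: list = field(default_factory=list)

        def credence(claim):
            """Naively nudge a claim's prior by its supports' credences."""
            c = claim.prior
            for child, weight in claim.supports:
                c += weight * (credence(child) - 0.5)
            return max(0.0, min(1.0, c))  # clip to [0, 1]

        # The metastudy example: a conclusion resting on studies of
        # shifting quality, which rest on proxy data of uncertain worth.
        proxy = Claim("limnological proxy data are reliable", prior=0.6)
        study = Claim("study X shows a medieval warm period", prior=0.5,
                      supports=[(proxy, 0.4)])
        meta  = Claim("climate is shifting back to a medieval warm state",
                      prior=0.5, supports=[(study, 0.5)])
        print(credence(meta))  # barely above 0.5: weak support from a shaky pile

    Even this toy version makes the point of the previous paragraph
    visible: the credence of the conclusion is only as good as the
    priors buried at the bottom of the graph.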

    -- 
    -----------------------------------------------------------------------
    Anders Sandberg                                      Towards Ascension!
    asa@nada.kth.se                            http://www.nada.kth.se/~asa/
    GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y
    

