Re: Why believe the truth?

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Jun 18 2003 - 09:01:05 MDT


    Robin Hanson wrote:
    > At 11:04 PM 6/17/2003 -0400, Eliezer S. Yudkowsky wrote:
    >
    >>> ... different people work on different topics, and hope to combine
    >>> their results later. When discussing each topic, one tries to
    >>> minimize the dependencies of results in this area on results in other
    >>> areas. ... I am trying to make our discussion of "why believe in
    >>> truth" be modular with respect to the very contrarian position that
    >>> our goals are very different from what evolution has given us, or
    >>> that the world will soon be very different from what evolution has
    >>> adapted to. The fact that you and I might happen to agree with this
    >>> contrarian position is beside the point. My first priority is to
    >>> make our conversation be accessible and relevant to the majority who
    >>> do not share this contrarian position.
    >>
    >> Deliberately strive for modularity? In a consilient universe? There
    >> is only ever one explanation. In it, all the pieces fit together
    >> perfectly, without strain. Any divisions in that explanation are
    >> artificial - human biases. I would not take it as a good sign if my
    >> theories about one part of cognitive science were consonant with many
    >> possible alternatives elsewhere; it would be a sign that the theory
    >> was inadequately constrained by the evidence. ... Your first
    >> priority should be to discover what the real answer is about the
    >> usefulness of rationality. When you know the real answer, then worry
    >> about how to explain it ... This is one of the reasons why I am not
    >> an academic...
    >
    > Sure there is only one total explanation, but are you, Eliezer S.
    > Yudkowsky, going to discover all of its parts by yourself and then reveal
    > them all to the world in one grand revelation? We are rich, in
    > knowledge as in other things, because of a division of labor. You need
    > to work with many other people to discover the one true explanation. So
    > you need to find a place within this division of labor where others can
    > appreciate your contributions and you can appreciate theirs. I fear you
    > have fallen for the "Dream of Autarky".

    Yes, as a matter of fact that does happen to be a major dream of mine.
    That applies after the Singularity, though, not before.

    This aside, whether many people or a few people are working out their
    areas of the One Explanation, my point is that I would not strive for
    modularity in my maps unless I thought that reality itself was modular
    with respect to the thing I was mapping. This idea of building a
    philosophy that is modular, where you can stand regardless of who else
    falls... it may help with public relations, but how can it possibly be
    *right*? Isn't this a sign that one has fallen into "mere
    philosophizing", unconnected from the rest of the universe? As Dan
    Fabulich put it:

    Dan Fabulich wrote:
    >>
    >> Nature doesn't work that way in constructing explanations; how could it
    >> be a good method for discovering them?
    >
    > My first response: "I would least expect to get this kind of argument
    > HERE, of all places! Isn't it rather the point that we can do a bit
    > better than nature?"

    When I am *designing* something, then yes, I will try to make the design
    modular because that is a good heuristic for humans to use. When I am
    trying to *discover* something I will not try to make the *explanation*
    modular unless I think the *reality* is modular - the purpose of the map
    is to correspond to the territory, after all. If you build a theory
    that's modular just because you want it to be modular, doesn't that mean
    you're inventing something rather than discovering something? Is this not
    the very essence of "mere philosophizing", building maps unconstrained
    by territory?

    If I am an interdependent programmer, I will try to write modular code and
    use other people's modular code. But if I am an interdependent explainer,
    I will try to constrain other people's explanations and be constrained by
    other people's explanations. It is a very different task from
    programming. I can clearly see the *memetic* benefits of having theories
    that are small and modular and persuasive whether or not you have the
    background in a particular field, and whether or not you accept the
    majority or the contrarian view, and so on, but is this not directly
    opposed to the way things really are? Doesn't it mean that you are just
    making stuff up, in which case you may as well make stuff up that's easy
    to package for sale? "This is why I am not an academic": emergent
    effects in academia produce forces on the map that do not correspond to
    forces on the territory.

    Robin Hanson and I both seem to agree that, given the facts as we know
    them, including contrarian facts, the best move is, in fact, to be
    rational. Fine. *With* this fact and the real reasons for it understood,
    we can go over the web of interdependencies and see if there are
    generalizable arguments that can be modularized away from correct but
    controversial beliefs. And indeed there are; for example, "rationality is
    a guard against unknown unknowns" - if you adopt the perspective of
    someone who doesn't know about the Singularity, then the Singularity is
    exactly the sort of unknown that rationality is meant to guard against,
    and many other similar possibilities would qualify as well. Or if you
    adopt the perspective of someone who doesn't know about evolutionary
    psychology, then it turns out that without the tremendously complex
    background in evolutionary psychology and game theory and all these other
    highly rational disciplines, there's no good way to calculate whether to
    be rational; and the generalizable argument here is, "If you're not
    rational, how can you figure out whether or not to be rational?" So there
    are generalizable, modular arguments, but these modular arguments have to
    *follow* the consilient theory. If your map is accurate then you should
    not be surprised if it starts looking consilient, frequently contrarian,
    and interdisciplinary; that is just reflecting the territory.

    I am sympathetic to the need to come up with modular *arguments* for good
    ideas - stretching the inferential chain back to the knowledge base of a
    general audience. Even here there are, in my opinion, strong constraints;
    one must come up with valid arguments, meaning arguments that you yourself
    currently take into account as evidence. The task is not persuasion or
    rationalization, but finding a subset of the valid arguments that are easy
    to explain, accessible to many disciplines, and require a minimum of
    new material as a prerequisite. This is the task of building a paved road
    down which anyone may travel - a concrete highway of strong, obvious
    evidence. But while we are blazing the first dirt trail and trying to
    discover where the path *goes*, we must have a sharp distinction between
    forces that constrain human argument, and forces that constrain the
    territory and its map.

    -- 
    Eliezer S. Yudkowsky                          http://singinst.org/
    Research Fellow, Singularity Institute for Artificial Intelligence
    

