Re: Fermi "Paradox"

From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Tue Jul 22 2003 - 09:16:31 MDT


    On Tue, 22 Jul 2003, Anders Sandberg wrote:

    > The problem lies in determining whether the die out account or the
    > become invisible account is true.

    Perhaps neither -- you have to allow that an ATC (advanced technological
    civilization) can *see* "everything".

    As I point out in the MBrains paper -- 100 billion telescopes the
    diameter of the moon are well within the reach of an ATC.
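
    For a rough sense of scale -- a back-of-envelope sketch (the 1 kg/m^2
    reflector areal density is an assumed illustrative figure, not one
    taken from the paper):

# Back-of-envelope: material budget for 1e11 Moon-diameter telescopes.
# The 1 kg/m^2 areal density is an assumed figure, used only for illustration.
import math

MOON_DIAMETER_M = 3.474e6      # m
N_TELESCOPES    = 1e11
AREAL_DENSITY   = 1.0          # kg/m^2, assumed lightweight reflector
EARTH_MASS_KG   = 5.97e24
MOON_MASS_KG    = 7.35e22

area_each  = math.pi * (MOON_DIAMETER_M / 2) ** 2   # collecting area per telescope
total_area = N_TELESCOPES * area_each
total_mass = total_area * AREAL_DENSITY

print(f"total collecting area: {total_area:.2e} m^2")
print(f"reflector mass       : {total_mass:.2e} kg "
      f"(~{total_mass / EARTH_MASS_KG:.2f} Earth masses, "
      f"~{total_mass / MOON_MASS_KG:.0f} Moon masses)")

    That works out to roughly a tenth of an Earth mass of reflector --
    trivial for a civilization that can disassemble planets.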

    Given that speed-of-light delays limit your foresight, are you going
    to expend a lot of resources going someplace only to get there and
    find it already occupied?
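
    To put a number on that -- a minimal sketch, where the target distance
    and cruise speed are purely illustrative assumptions:

# Staleness of your information about a colonization target.
# The distance and cruise speed here are illustrative assumptions.
DISTANCE_LY  = 1_000     # light-years to the target
CRUISE_SPEED = 0.1       # fraction of c

data_age_at_launch  = DISTANCE_LY                  # your telescope data is this old
transit_time        = DISTANCE_LY / CRUISE_SPEED   # years spent in transit
data_age_on_arrival = data_age_at_launch + transit_time

print(f"data age at launch : {data_age_at_launch:,} yr")
print(f"transit time       : {transit_time:,.0f} yr")
print(f"data age on arrival: {data_age_on_arrival:,.0f} yr")

    By the time you arrive, everything you knew about the target is more
    than ten thousand years out of date -- plenty of time for someone else
    to have claimed it.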

    > Historical evidence doesn't tell us much, since the rise and fall of
    > civilisations on average has not had much effect on humanity as a whole.
    > It is now when civilisations become global that the risks go up for
    > permanent failures.

    It is very difficult to eliminate ATCs that have gotten to our level.
    There are thousands of people in submarines, with nuclear power as an
    energy resource, who would likely survive a GRB (gamma-ray burst) --
    and then there are the people within Cheyenne Mountain and similar
    facilities.

    And then there is the rapid evolution of humanity -- it took only a few
    million years (and we know that multiple experiments were being conducted
    in the primate lineage during that period). Knock us back down to the
    chimpanzee level and there are reasonably good odds nature would
    reinvent us.

    > But it is not enough to assume that the probability
    > of civilisation crashing is high; it has to be so high that it results
    > in *no* expanding starfarers.

    They don't have to "crash" -- all advanced civilizations can simply reach
    the conclusion that there is *no point* to expansion. The reason that
    humans colonize is to have more resources for replication -- once one
    realizes that replication (beyond the limited forms of self-replication
    that allow one to trump the galactic hazard function) is pointless,
    one would logically stop doing it.

    > Given the awesome multiplicative power of
    > even simple self-replication, once a civilisation can start sending out
    > large numbers it is very hard to get rid of it.

    It is easy to produce lots of "simple" self-replicators -- but it isn't
    a good idea to do so. At least some of the bacteria in my gut would
    attempt to consume me if they could get around my immune system. Better
    not to give them lots of opportunities to do so.
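
    As for just how "awesome" that multiplicative power is -- a quick
    sketch (the ~400 billion star figure is a round number used only for
    scale):

# Doublings needed before self-replicating probes outnumber the galaxy's stars.
# The star count (~4e11) is a commonly cited round figure, used only for scale.
import math

STARS_IN_GALAXY = 4e11
doublings = math.ceil(math.log2(STARS_IN_GALAXY))
print(f"~{doublings} doublings take a single probe past {STARS_IN_GALAXY:.0e} copies")

    A few dozen generations of copying is all it takes -- which is exactly
    why "trustable" matters far more than "simple".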

    "Complex", and more importantly "trustable", self-replicators may be a
    very difficult problem. Do you *really* want to be standing toe-to-toe
    with a copy of yourself when the resources of the universe start drying
    up *knowing* that they know exactly what you know and you both know
    "there can be only one" (to steal a line from The Highlander)...

    > "(//((! Have you seen the new gamma ray burster in the Milky Way?"
    > "Yes /||\, I have. I hope there were no intelligent life around there."
    > "We will know when we send out or probes..."

    There seems to be a reasonable argument for the "galactic club" enforcing
    a "Thou shalt not send out self-replicating probes" interdiction -- because
    no advanced civilization is going to want to deal with the problems such
    probes would create in the future.

    > But in general I think we are lacking something in the philosophy of the
    > Fermi paradox. We need to think better here.

    I think it relates to the transition from a "randomly evolved" intelligence
    (i.e. mutation and "natural" selection) to an intelligence whose evolution
    is a "self-directed" process.

    Question -- if you knew you were likely to survive until the "end of the
    universe" -- would you actively seek to create future problems that you
    would eventually have to deal with?

    I don't think I would.

    Robert


