Re: Fermi "Paradox"

From: Kevin Freels (megaquark@hotmail.com)
Date: Tue Jul 22 2003 - 11:44:42 MDT


    "I think it relates to the transition from a "randomly evolved" intelligence
    (i.e. mutation and "natural" selection) into a "self-directed" evolutionary
    process intelligence.

    Question -- if you knew you were likely to survive until the "end of the
    universe"
    with high probability -- would you actively seek to create future problems
    that
    you would eventually have to deal with?

    I don't think I would.

    Robert"

    This thread contains the best solution to the Fermi paradox I have heard yet,
    but it seems a bit chopped up. Here is what I get from this discussion:

    Once an intelligent species enters a period of "self-directed" evolution, it
    becomes a "master of matter". At that point, raw materials are all it needs
    to create the resources required for survival. It becomes cheaper and more
    efficient for such a species to stay where it is than to expand throughout
    the universe, especially since almost unlimited energy can be drawn from
    what we would consider finite resources.

    This change occurs long before the capability to "travel" the universe
    develops, so intelligent species tend to stay "close to home".

    Anyway, that's what I read into it, and it makes sense. Either that, or I'm
    a crackpot and imagined reading that into it. Still, it seems to make some
    sort of sense. Anyone want to elaborate?
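
    To put a rough number on the "almost unlimited energy" point above, here is
    a back-of-envelope sketch in Python. The constants are standard physics; the
    one-tonne scenario and the ~2e13 kWh/year figure for world electricity use
    are my own illustrative assumptions, not anything from this thread:

        # Back-of-envelope: rest-mass energy (E = m * c^2) of ordinary matter.
        # Constants are standard; the scenario numbers are illustrative only.

        C = 3.0e8            # speed of light, m/s
        J_PER_KWH = 3.6e6    # joules per kilowatt-hour

        def mass_energy_joules(kg):
            """Rest-mass energy of `kg` kilograms of matter."""
            return kg * C ** 2

        e_tonne = mass_energy_joules(1000.0)   # one tonne, fully converted
        kwh = e_tonne / J_PER_KWH              # ~2.5e13 kWh

        # World electricity use is on the order of 2e13 kWh/year (rough figure),
        # so perfect conversion of a single tonne is roughly a year's supply.
        world_kwh_per_year = 2e13
        print(f"1 tonne -> {e_tonne:.1e} J (~{kwh:.1e} kWh)")
        print(f"Roughly {kwh / world_kwh_per_year:.1f} years of world electricity")

    Even at fusion-level efficiencies (under one percent of the mass converted to
    energy), the local numbers are enormous, which is the sense in which "finite"
    resources start to look effectively unlimited.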

    ----- Original Message -----
    From: "Robert J. Bradbury" <bradbury@aeiveos.com>
    To: <extropians@extropy.org>
    Sent: Tuesday, July 22, 2003 10:16 AM
    Subject: Re: Fermi "Paradox"

    >
    > On Tue, 22 Jul 2003, Anders Sandberg wrote:
    >
    > > The problem lies in determining whether the die out account or the
    > > become invisible account is true.
    >
    > Perhaps neither -- you have to allow that an ATC can *see* "everything".
    >
    > As I point out in the MBrains paper -- 100 billion telescopes the
    > diameter of the moon are well within the reach of an ATC.
    >
    > Given speed-of-light delays limiting foresight, are you going to
    > expend a lot of resources going someplace only to get there and
    > find it already occupied?
    >
    > > Historical evidence doesn't tell us much, since the rise and fall of
    > > civilisations on average has not had much effect on humanity as a whole.
    > > It is now when civilisations become global that the risks go up for
    > > permanent failures.
    >
    > It is very difficult to eliminate ATCs that have gotten to our level.
    > There are thousands of people in submarines with nuclear power as an
    > energy resource who would likely survive a GRB -- and then there are the
    > people within Cheyenne Mountain and similar facilities.
    >
    > And then there is the rapid evolution of humanity -- a few million
    > years (and we know that multiple experiments were being conducted
    > in the primate lineage during that period). Knock us back down to
    > the chimpanzee level -- reasonably good odds nature would reinvent us.
    >
    >
    > > But it is not enough to assume that the probability
    > > of civilisation crashing is high; it has to be so high that it results
    > > in *no* expanding starfarers.
    >
    > They don't have to "crash" -- all advanced civilizations can simply reach
    > the conclusion that there is *no point* to expansion. The reason that
    > humans colonize is to have more resources for replication -- once one
    > realizes that replication (beyond the limited forms of self-replication
    > which allow one to trump the galactic hazard function) is pointless,
    > then one would logically stop doing it.
    >
    > > Given the awesome multiplicative power of
    > > even simple self-replication, once a civilisation can start sending out
    > > large numbers it is very hard to get rid of it.
    >
    > It is easy to produce lots of "simple" self-replicators -- but it isn't
    > a good idea to do so. At least some of the bacteria in my gut would
    > attempt to consume me if they could get around my immune system. Better
    > not to give them lots of opportunities to do so.
    >
    > "Complex", and more importantly "trustable", self-replicators may be a
    > very difficult problem. Do you *really* want to be standing toe-to-toe
    > with a copy of yourself when the resources of the universe start drying
    > up, *knowing* that they know exactly what you know and you both know
    > "there can be only one" (to steal a line from The Highlander)...
    >
    > > "(//((! Have you seen the new gamma ray burster in the Milky Way?"
    > > "Yes /||\, I have. I hope there were no intelligent life around there."
    > > "We will know when we send out or probes..."
    >
    > There seems to be a reasonable argument for the "galactic club" enforcing
    > a "Thou shalt not send out self-replicating probes" interdiction -- because
    > any advanced civilization isn't going to want to deal with the problems
    > they create in the future.
    >
    > > But in general I think we are lacking something in the philosophy of the
    > > Fermi paradox. We need to think better here.
    >
    > I think it relates to the transition from a "randomly evolved" intelligence
    > (i.e. mutation and "natural" selection) into a "self-directed" evolutionary
    > process intelligence.
    >
    > Question -- if you knew you were likely to survive until the "end of the
    > universe" with high probability -- would you actively seek to create future
    > problems that you would eventually have to deal with?
    >
    > I don't think I would.
    >
    > Robert
    >
    >
    >


