RE: Fermi "Paradox"

From: Emlyn O'regan (oregan.emlyn@healthsolve.com.au)
Date: Tue Jul 22 2003 - 23:44:15 MDT


    > Anders wrote:
    > > But it is not enough to assume that the probability
    > > of civilisation crashing is high; it has to be so high
    > > that it results in *no* expanding starfarers.
    >
    Robert wrote:
    > They don't have to "crash" -- all advanced civilizations can
    > simply reach the conclusion that there is *no point* to expansion.
    > The reason that humans colonize is to have more resources for
    > replication -- once one realizes that replication (beyond limited
    > forms of self-replication which allow one to trump the galactic
    > hazard function) is pointless, then one would logically stop
    > doing it.

    Well, life itself is essentially pointless, but we keep doing that. There's
    a great memeplex surrounding individualism, which is the basis for
    transhumanism, amongst many other ideas. Certainly it is the basis for
    desires for individual longevity and enhanced individual potency.

    Damien mentioned in another post an idea that I think is paramount at this
    point: the top-level replicators are now ideas (or memes, if you go in for
    them). Our human civilisation is the main character in its own story, but
    it is also the theatre for the competition of ideas. Ideas play by the same
    rules as genes; they need to survive and replicate.

    This is how you can explain someone sacrificing himself out of duty to his
    country, or deciding not to have children and instead working to become
    wealthy. He does not spread his genes, or continue himself; instead, he
    replicates the ideas, or defends other holders of those ideas, and is
    rewarded for it by the ideas.

    OK, that's basic meme theory. And what it says about Robert's idea above is
    that the pointlessness argument must also work at the level of ideas: for
    "it's pointless" to hold, colonisation of the universe must not reinforce
    any conceivable stable memeplex.

    Possibly for individualism this is true. However, there are many imaginable
    (indeed existing!) collective-thinking memesets for which colonisation has
    conceivable benefit, even if no material benefits are reaped in the lifetime
    of the actors.

    For instance, a strongly group-oriented collectivist nation might see the
    act of colonising the galaxy as glorifying the state; the sacrifice of the
    individual may be justified in such circumstances, even to the individuals
    themselves.

    Or, a set of religious memes which deify "creation" might conceivably morph
    into an urge to seed the stars with the work of the lord.

    Or maybe some civilisation bumps into another civilisation, and is so
    repulsed by it, or so confused by it, or brought so close to the brink of
    extinction by it, that the decision is made to spread everywhere, with the
    goal of eradicating all alien lifeforms and covering the universe with
    understandable things.

    In short, there are a lot of reasons that critters act. Unless you have
    *really* strong evidence, you must assume that anything will be tried by
    someone, somewhere.
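
    To put rough numbers on that last claim, here is a toy Python sketch; both
    figures below are invented for illustration, not estimates:

        # Toy numbers, purely illustrative: how rare would an "expand!"
        # memeplex have to be for *zero* civilisations to try colonisation?

        num_civilisations = 1_000_000  # assumed civilisation count, invented
        p_expansionist = 1e-4          # assumed chance any one civilisation
                                       # hosts a stable expansionist memeplex

        expected_expanders = num_civilisations * p_expansionist
        p_none_at_all = (1 - p_expansionist) ** num_civilisations

        print(f"expected expanders: {expected_expanders:.0f}")  # ~100
        print(f"chance nobody expands: {p_none_at_all:.1e}")    # ~3.7e-44

    Even at long odds against any single civilisation keeping an expansionist
    memeplex, "nobody anywhere expands" requires those odds to be
    astronomically long.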

    > > Given the awesome multiplicative power of
    > > even simple self-replication, once a civilisation can start
    > > sending out large numbers it is very hard to get rid of it.
    >
    > It is easy to produce lots of "simple" self-replicators -- but it
    > isn't a good idea to do so. At least some of the bacteria in my gut
    > would attempt to consume me if they could get around my immune
    > system. Better not to give them lots of opportunities to do so.

    Again, this is your opinion, but it's not the only possible one by any
    means.
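
    (For scale on that "multiplicative power", a toy calculation, with both
    parameters invented for illustration and travel time ignored:

        import math

        # Assumed: each probe builds one copy of itself every 500 years,
        # and the galaxy holds ~4e11 star systems. Travel time is ignored.
        stars_in_galaxy = 4e11       # invented figure
        doubling_time_years = 500    # invented figure

        doublings = math.ceil(math.log2(stars_in_galaxy))  # 39
        years = doublings * doubling_time_years            # 19,500

        print(f"{doublings} doublings, ~{years:,} years of replication")

    A few tens of thousands of years of unchecked doubling puts a probe at
    every star; the hard part is stopping it, not starting it.)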

    >
    > "Complex", and more importantly "trustable", self-replicators may be
    > a very difficult problem. Do you *really* want to be standing
    > toe-to-toe with a copy of yourself when the resources of the
    > universe start drying up, *knowing* that they know exactly what you
    > know and you both know "there can be only one" (to steal a line
    > from The Highlander)...
    >

    In the bullish times of universal expansion, I think it's fair to say that
    many would likely ignore bearish predictions of the ultimate resource
    crunch.

    > > "(//((! Have you seen the new gamma ray burster in the Milky Way?"
    > > "Yes /||\, I have. I hope there were no intelligent life
    > around there."
    > > "We will know when we send out or probes..."
    >
    > There seems to be a reasonable argument for the "galactic club"
    > enforcing a "Thou shalt not send out self-replicating probes"
    > interdiction -- because any advanced civilization isn't going to
    > want to deal with the problems they create in the future.

    We would do it.

    >
    > > But in general I think we are lacking something in the philosophy
    > > of the Fermi paradox. We need to think better here.
    >
    > I think it relates to the transition from a "randomly evolved"
    > intelligence (i.e. mutation and "natural" selection) into a
    > "self-directed" evolutionary process intelligence.
    >
    > Question -- if you knew you were likely to survive until the
    > "end of the universe" with high probability -- would you actively
    > seek to create future problems that you would eventually have to
    > deal with?
    >
    > I don't think I would.
    >
    > Robert

    Again, it depends a lot on your point of view.

    For instance, many extropians appear to believe that they very well *might*
    survive until the end of the universe. How many of us have therefore shifted
    our focus to extreme long-range environmentalism and conservationism? Not
    many, I think; we tend to believe that, rather than avoid digging holes, we
    should build better ladders. Why wouldn't that thinking extend to advanced
    civilisations?

    For instance, there's always been the question of what an MBrain actually
    thinks about. Maybe a civilisation/entity/intelligence which can reach such
    a state gets concerned with finding a way out of the box (the observable
    universe). If an MBrain can't work it out, or can get close but not the
    whole way, then turning all of existence into computronium must begin to
    look like the positive alternative to a slow, hopeless, lingering death.

    Emlyn


