RE: Fermi "Paradox"

From: Lee Corbin (lcorbin@tsoft.com)
Date: Wed Aug 06 2003 - 19:12:11 MDT


    Robert writes

    > > I do not see what safety has to do with it.
    >
    > "Distributed Replicated Intelligence"
    >
    > A human brain cannot avoid the hazard function imposed by
    > an earthquake, an asteroid, etc. However, if you have
    > redundant subcomponents distributed across 25 AU you can
    > trump those hazard functions. It is basic "fault tolerance"
    > in computing data center configurations.

    Yes, that figures.
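    The fault-tolerance point can be sketched numerically. This is
    a minimal illustration, not anything from Robert's post: the
    per-replica hazard probability p and the replica count n are
    hypothetical, and independence across sites is exactly what
    distributing subcomponents over 25 AU is meant to buy.

```python
def survival_probability(p: float, n: int) -> float:
    """Chance that at least one of n replicas survives, assuming
    each is destroyed independently with probability p (the
    independence comes from wide spatial separation)."""
    return 1.0 - p ** n

# One brain in one place vs. five redundant, distributed copies
# (p = 0.1 is an arbitrary illustrative hazard probability):
print(survival_probability(0.1, 1))   # 0.9
print(survival_probability(0.1, 5))   # ~0.99999
```

    Each added replica multiplies the joint-failure probability by
    another factor of p, which is the sense in which distribution
    "trumps" a localized hazard function.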

    > For example do you want to "think" about a problem
    > that some other SI in the galaxy solved 20,000 or
    > even 2,000,000,000 years ago???

    Absolutely, if the answer has been misplaced.
    (Of course, we are talking about an era in which
    gratification may be obtained in any way wished,
    so there is a higher, aesthetic sense driving the
    selection of problems.)

    > > Yes, and getting back to another issue, *this* is why we
    > > would be so appealing for the beings of [Tralfamador] to
    > > colonize: we have plenty of unused cm^3s.
    >
    > But Lee, you are oversimplifying -- to use those cm^3
    > one needs (a) energy sources; (b) certain types of matter;
    > and (c) someplace to dump the waste heat.

    Yes, thank you for the reminder. I do lose sight of that
    now and then. I don't think it affects my conclusions,
    however.

    > One would anticipate that *if* one were going to colonize
    > one would select optimal locations with a mix of those 3
    > resources.

    Cannot quite agree: in the picture I have of what will
    happen, one will colonize everything. And that is true
    whether or not the remote matter ought to be considered
    a version of oneself.

    > I have not changed my mind. Lee II's are a hazard -- but
    > perhaps naturally only over billions to trillions of years.

    Well, I do tend to ignore "threats" from others that won't
    materialize for a long time. (It's much as if I were stranded
    on a desert island with 10 other people and enough supplies
    for 30 years.) A *discount factor* has to be operating in
    your value calculations here.

    > But it is *very* difficult to replace a component in a
    > remote star system if it is allowed to rise to your level
    > of development.

    I wonder why it is that I don't worry about that and you
    do? I assume that distant people (even in L.A.) probably
    aren't going to turn into threats in the foreseeable or
    unforeseeable future. There is a lot of living to do without
    such arcane concerns.

    > Even at light speed it's 8 years round trip for information
    > to/from the Alpha Centauri system -- how do you deal with
    > something that has 10^42 OPS for 8 years (~10^51 OPS) to
    > figure out how to defend against something you may send
    > or it may send against you?

    Well, criticize this: they indeed have got a Jupiter brain
    going there, just as the solar system has its own. I dismiss
    the threat of *physical* aggression because the distances
    are too great, and the size and sophistication of the
    invaders would have to be too superior. But algorithms
    are another story. I have always pictured a "Wind from
    Earth" in the sense of frighteningly superior algorithms
    that local matter may or may not run as they are received
    electromagnetically. (You have stated that there are
    probably quickly reached upper limits to how advanced
    matter will get, and my statements here are modulo that.)

    Either you do not run these algorithms (and do not attempt
    to incorporate them into yourself)---and run the risk of
    falling further and further behind---or you do run them,
    and run the risk of having your identity changed a lot and
    in ways you don't anticipate. The evolutionary struggle
    never really ceases.

    But it would be silly for a local intelligence to seriously
    diminish its standard of living by spending resources attempting
    to anticipate and prevent developments at Alpha Centauri.

    > > Who wrote that? From the heading it appears to be Robert,
    > > but then if so he is referring to himself in the third
    > > person.
    >
    > Yes, it is me talking about myself in the 3rd person.
    > Sorry, I should have perhaps added a :-).

    Okay, and I probably should have read some " :-) "s into
    it, but in general attribution is not done as responsibly
    as in the good old days, and I was just worried.

    Lee



    This archive was generated by hypermail 2.1.5 : Wed Aug 06 2003 - 19:22:32 MDT