On Tue, 23 Nov 1999, Robin Hanson wrote:
> ... commenting on my comments about optimization of longevity ...
> It seems that you assume that:
> Virtually all advanced creatures in the universe care essentially
> only about their *individual* longevity.
> And though you have not said so explicitly, you seem to have "fast
> high-communication computation" concept of the individual. That is,
> you seem to preclude a creature who's concept of itself includes
> things, like computers around distant stars, that can't contribute
> to computations which must be done within a short time and require
> high levels of communication between parts. After all, a creature
> focused on the longevity of its clan or species might act very
True, my assumptions are based on the carry-over of the "individual" genetic drive for self-preservation. If current models of evolutionary systems and natural selection are "typical", traits promoting individual survival are strongly preferred over group-selected traits. You only have to compare the numbers of species (millions) where individual selection rules against the few (perhaps a few collective species such as bees, ants or termites) where group selection may play a role.
I also assume that there is a significant longevity benefit from increased intelligence. As Billy pointed out, we may be approaching the point where the hazard level is small relative to our intelligence level. So, if intelligence trumps hazards to the point where they become insignificant, then the costs of long-distance communications will be less significant.
> You seem to want to allow these creatures to place a small value on
> things like art, but if you allow that, I can't see why you don't also
> allow them to place a small value on colonization, and then we have
> to explain the lack of colonization again. Similarly, you have to
> assume that virtually all creatures have this singular focus on
> "individual" longevity, to explain the lack of colonization.
I didn't intend to imply that. It may be that all they do is focus their internal thought capacity (above that required for survival levels) on new and ever more creative art forms. I probably would argue that the creation of art "internally" (virtually) makes more sense than "externally" (rearranging stars into constellations, etc.) because it is less expensive. At the level of "thought", it seems internal reality and external reality are the same, so rationally you would use the cheapest materials, i.e. virtual art.
The same reasoning would probably apply to colonization. Virtual colonization could be cheaper than real colonization. You create an internal "solar system", send off a virtual probe and see what happens. Since you may be able to run the simulation faster internally than in external reality, it may be more desirable to do so. Of course this depends a lot on the size of the simulation you want to run and the granularity of your underlying reality.
Real colonization would be desirable if you do not have the internal capacity to run virtual colonization exercises at very fine granularity.
I don't have to assume that *all* SIs/species/collective-individuals have a personal-maximization perspective. I do have to assume that a potentially colonizing species would recognize this as one very valid, perhaps even dominant, developmental path for an ETC. In that case, colonization becomes a limited and/or stop-n-go effort. Paraphrasing an earlier question -- would Columbus have voyaged to the New World if he knew it might be populated by Indians armed with AK-47s? If the answer is no, then colonization doesn't happen unless you know for sure your target isn't a Jupiter Brain masquerading as a Jupiter (for whatever reasons). If the answer is yes, then Columbus wouldn't have been launching 4-ship voyages of discovery; he would have been launching the Spanish Armada.
It seems to me that the discussion of colonization must rest upon a relatively strong guarantee that you will be occupying "unowned" resources. If you can't guarantee that, then sending a probe or a star ship makes no sense. You had better go there with all the intelligence, mass and energy you can muster. That means you take the entire SI.
> I accept that the ultimate test of your assumption is empirical,
> and so we should consider whatever empirical evidence you offer.
First and foremost is the missing mass/dark matter problem. When 90% or more of the mass in galaxies is "missing" and physicists have to invent "new physics" to explain it, I consider that a problem. I'll be explaining this in more detail in a forthcoming discussion.
Second is the gravitational microlensing observations. See: http://abcnews.go.com/sections/science/DailyNews/darkmatter980817.html
The papers behind this news item document the possibility of hundreds of billions of "objects" with masses averaging ~0.3 M_sun. The best explanations for this to date are primordial black holes (Alcock's proposal) and large numbers of white dwarfs with hydrogen atmospheres (from work on the Hubble Deep Field). However both of these run into problems with other astronomical theories and observations. As an aside, Alcock's explanation may be of significant concern because if the universe is populated with large numbers of primordial black holes, we may have big problems avoiding a relatively "rapid" (on cosmic time scales) loss of both matter and energy from the universe.
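To put the cited numbers in perspective, here is a back-of-envelope check on how much of a galactic halo those lensing objects could account for. The object count and average mass come from the text above; the halo mass is my own illustrative assumption, not a figure from the papers.

```python
# Rough consistency check on the microlensing figures cited above.
# n_objects and m_avg follow the text; halo_mass is an assumed round number.

n_objects = 3e11   # "hundreds of billions" of lensing objects (illustrative)
m_avg     = 0.3    # average mass per object, in solar masses (from the text)
halo_mass = 1e12   # assumed total Milky Way halo mass, in solar masses

dark_fraction = n_objects * m_avg / halo_mass
print(f"fraction of an assumed halo in ~0.3 M_sun objects: {dark_fraction:.0%}")
```

With these assumed inputs the objects account for roughly a tenth of the halo; the point is only that such populations are large enough to matter for the missing-mass question, not that these particular values are right.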
> But if you think there are theoretical reasons for us to think
> your assumption plausible, you need to clarify them. All
> creatures valuing only longevity is not implied by these
> creatures being conscious, nor is it implied by an evolutionary
> selection of creatures.
No, but conscious (intelligent) creatures valuing longevity *will*
become the dominant population over time due to simple selection
effects. Sooner or later the creatures/species that don't have this
as a value will have fatal accidents. Even if the traits of
"maximizing personal longevity" and "intelligence" are very
rare, over time they should come to predominate. This goes back
to the Drake equation and the "L" (longevity of "communicating"
technological civilizations) parameter. The articles either ignore,
or do a lot of hand-waving about, values of "L" in the billions of
years. If you allow such values, then the dominant population
in our galaxy *should* be advanced ETC. However, if you split
the parameter into:
  L_c  - longevity of civilizations in a radio-communicative stage
  L_si - longevity of civilizations at a post-nanotech SI stage
then things make much more sense. The Drake equation and classical SETI are concerned with L_c, which, if we are an example, is probably between 100 and 200 years. Astronomical observations, however, must be interpreted in the light of L_si, and that seems to be an indeterminate (large) value depending on the tradeoffs between hazards and how much matter and energy SIs want to devote to minimizing their hazard function.
> The only plausible scenario I can imagine for producing this
> situation is if a single power arises early in the universe and
> quickly fills the universe without actually colonizing it much,
> and actively destroys any creatures with other values.
The jury is out IMO. I believe I've stated that the colonization discussions make complete sense if your civilization happens to be first. And someone, someplace *has* to be first. I don't think we know enough to estimate the probability of multiple intelligent ETCs arising in a galaxy simultaneously. If that probability is high, then we are back to my stop-n-go colonization, followed by wars, mergers and acquisitions. The data cited above would suggest that we are in the post-war/merger stage.
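One crude way to frame "how likely is simultaneity" is to model civilization births as a Poisson process and ask the chance that no other civilization arises during your own vulnerable window. Both the birth rate and the window length below are pure assumptions for illustration; nothing in the text fixes them.

```python
# Hedged sketch: probability of being "first and alone" during an expansion
# window, under a Poisson model of civilization births. Rates are assumptions.
import math

birth_rate = 1e-8   # new civilizations per year in the galaxy (assumed)
window     = 1e6    # years during which a young ETC is vulnerable (assumed)

expected_others = birth_rate * window          # mean arrivals in the window
p_alone = math.exp(-expected_others)           # Poisson P(zero arrivals)
print(f"P(no contemporaries within {window:.0e} yr): {p_alone:.4f}")
```

Under these particular numbers you are almost certainly alone in your window; raise the birth rate a few orders of magnitude and the stop-n-go / wars-and-mergers picture takes over instead.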
On the other hand if maximization of local intelligences or maximization of different ideas or cultural diversity are guiding principles, then it makes sense why we still see stars. There is an undeveloped field of economics involving "Abundances and Scarcities in Galaxies dominated by Superintelligences" that really needs some work. :-)