Re: Why Would Aliens Hide?

Robert J. Bradbury (bradbury@www.aeiveos.com)
Tue, 23 Nov 1999 05:01:02 -0800 (PST)

On Wed, 17 Nov 1999, Robin Hanson wrote:

>
> Let me paraphrase to see if I understand you. You seem to be saying
> that creatures at one star care only about the energy and metals at
> that star.

This comment makes clear part of the problem: I'm not sure that, in any of these discussions, I have clearly stated my assumptions.

It may be unclear to others what problem I'm trying to solve. Since Extro3, I've been wrestling with a single question: "What are the limits to longevity?"
Or, put another way (more related to the biology of aging): "How small can you make your hazard function?" Or, perhaps in more economic terms:
"When does the cost of reducing your hazard function exceed the benefit in increased longevity derived from that reduction?"

Now, if we look at the biology of aging, there seem to be three things that allow longevity:

  1. Highly effective hazard-avoidance characteristics (e.g. hard exoskeletons or flight)
  2. Size (making small hazards irrelevant)
  3. Intelligence (allowing hazard prediction & avoidance)

So, in my mind, the limits to longevity are closely tied to the questions of:

  "What are the limits to intelligence?"
  "What are the limits to 'physical' size?"
  "What are the tradeoffs between tolerating hazards and fleeing them?"

Obviously you get into complex interactions, because greater intelligence lets you make better tradeoffs between "tolerable" hazards and the costs of flight. Size is important because hazard avoidance gets very expensive when you may be dealing with solar-system-mass objects.

Now, looking at these things, it would seem that one would want to grow to a size that maximizes intelligence, providing the greatest ability to predict hazards and to minimize the costs of avoiding them. At some point, additional increments in intelligence (if it is on a diminishing-returns curve, as previously discussed) will fail to produce corresponding reductions in hazard-avoidance costs. At that point, it seems, you stop "growing".
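
A minimal sketch of that stopping rule, with purely illustrative functional forms (a logarithmic intelligence curve, avoidance costs inversely proportional to intelligence, a linear cost of carrying mass; none of these curves or constants are derived from anything above):

  import math

  def intelligence(m):
      """Assumed diminishing-returns curve: intelligence ~ log of mass m."""
      return math.log1p(m)

  def avoidance_cost(m):
      """Assumption: smarter entities predict/avoid hazards more cheaply."""
      return 100.0 / intelligence(m)

  def growth_cost(m):
      """Assumption: carrying extra mass itself costs something."""
      return 0.5 * m

  def total_cost(m):
      return avoidance_cost(m) + growth_cost(m)

  # Crude scan for the size at which growing further no longer pays:
  best = min(range(1, 1000), key=total_cost)
  print("toy optimum: stop growing near m =", best)

Change the assumed curves and the stopping point moves, but the structure of the argument (grow until the marginal intelligence gain stops paying for itself) is unchanged.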

In my framework, any exploring by intelligent sub-entities, or any division of "self", is likely to *increase* your hazard function and is therefore an undesirable course. The intelligent sub-entities have an increased hazard function because they are (a) physically smaller and (b) less intelligent. The only situation in which intelligent sub-entities would be the correct solution is when you are faced with an unavoidable catastrophe (in which case their reduced mass allows them to escape more quickly). Division of "self" is likely to *increase* your hazard function (due to resource competition at some future time), unless you can prevent self-evolution into something that removes the "honor your parent/brother/sister forever" programming clause.

In my framework, many historical motivations for migration or colonization (e.g. religious freedom, the search for mates, curiosity [why go there, when you can observe it?], etc.) are probably irrelevant. The prime motivator of behavior is the minimization of the hazard function for the maximally intelligent "being".

> Thus the only interest they have in other stars is to
> collect metal from those stars and bring it back home. This interest
> is temporary, because eventually they have more metal at home than you
> can use, given their limited local energy. Also, they have no interest
> in the energy at other stars, because energy is too expensive to
> immediately transport or to store awaiting a low cost opportunity.

Essentially yes. At some point additional mass (or additional energy) *may* fail to yield corresponding benefits in intelligence or longevity (it may even reduce them, if it makes you very massive/slow).

Now at this point, *if* you have effectively solved your personal survival (longevity) problem (i.e. you can predict and avoid hazards for billions to trillions of years [until energy resources get *very* scarce]), the question that I have *not* solved is:

  "What do you optimize?"
or
  "What do SIs value?"
  Intelligence? Memory? Information? History? Creativity? Beauty? Art?

I'm not sure that we as humans could answer these questions. Without those answers, determining the logical structure of individual SIs or SI cultures may be difficult. At that point I simply fall back on the astronomical observations and look for anything that might offer suggestions.

>
> Empirically, I think this predicts that we will not see metal planets
> around visible stars. This is a prediction we should be able to
> verify in the next twenty years or so. I presume you use "metal"
> in the usual astrophysical sense of everything but H and He?

Yep, though metals in the SI sense are slightly different from the astrophysical sense, in that there are *optimal* element "mixes" for maximizing some of the "valuables" listed above. Collecting or manufacturing that mix may take some time.

We do, from time to time, observe stars with very strange metal abundances. Some of these have proposed astronomical explanations. However, you could just as easily argue that such stars or solar systems are being mined for the valuable materials, and the rest is mine tailings.

>
> It also predicts we will not find effective ways to transport or
> store energy. But how hard would it be to surround a star with a
> parabolic mirror to direct starlight to a different system? Or to
> power a laser directed there? Perhaps you want to argue that the
> local demand for energy will also satiate, as well as the demand
> for mass.
>

Clearly you can beam energy around (lasers or microwaves are probably best). Your collector dishes would have to be fairly large, but you probably have arrays of this type anyway, because you want huge (moon-diameter) telescope "reflector" arrays for hazard detection. How effective this is depends on the mass required for the reflectors & collectors and the distance over which the energy is being beamed.
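
For a sense of scale (my numbers, purely illustrative): the diffraction-limited spot diameter at range L, for transmitter aperture D and wavelength lam, is roughly 2.44 * lam * L / D (the Airy first null), so a moon-diameter aperture puts a 1-micron laser into a spot only tens of kilometers across even at interstellar range:

  LIGHT_YEAR = 9.461e15  # meters

  def spot_diameter(lam, L, D):
      """Approximate diffraction-limited spot diameter (m) at range L."""
      return 2.44 * lam * L / D

  L = 4.2 * LIGHT_YEAR            # roughly the distance to Alpha Centauri
  for D in (1e3, 1e6, 3.5e6):     # 1 km, 1000 km, ~moon-diameter apertures
      d = spot_diameter(1e-6, L, D)   # 1-micron laser
      print(f"aperture {D / 1e3:6.0f} km -> spot ~ {d / 1e3:9.1f} km")

So the limiting factor is less the optics than, as noted above, the mass budget for the reflectors and collectors at each end.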

In my models, solar systems contain enough mass to fully utilize a star's energy output at thermodynamic efficiencies around 95+%. To get to 99.9% or higher (where the outer computronium/memory-storage shells are radiating waste heat at < 10-20K) may require large remote mass contributions or element breeding. However, additional increments at these outer levels pay a big price in communications overhead (which is why I brought in the models for evaluating parallel computational architectures), so an additional 1% in mass might allow the use of 0.01% more energy but may add only 0.000001% in "intelligence". The additional mass (by imposing higher time/energy costs on hazard avoidance) or additional energy (by increasing your heat signature to potentially unfriendly observers(?)) may increase your hazard function to a degree that cannot be offset by the diminishing increases in intelligence.
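
The radiator arithmetic behind the "< 10-20K" figure is standard Stefan-Boltzmann bookkeeping (the spherical-shell geometry is my simplification): to dump power P as waste heat at temperature T you need radiating area A = P / (sigma * T^4), and at these temperatures that area, hence the shell radius and mass, becomes enormous:

  import math

  SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W / (m^2 K^4)
  L_SUN = 3.8e26    # solar luminosity, W
  AU = 1.496e11     # meters

  for T in (300.0, 50.0, 20.0, 10.0):
      area = L_SUN / (SIGMA * T ** 4)              # required radiating area
      radius = math.sqrt(area / (4 * math.pi))     # equivalent spherical shell
      print(f"T = {T:5.0f} K -> shell radius ~ {radius / AU:8.1f} AU")

A 10K outer shell around a solar-luminosity star needs a radiating sphere roughly 1500 AU in radius, which is why the outermost shells dominate both the mass budget and the communications overhead.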

I'll simply add that a complete analysis of the declining benefits (i.e. some more solid numbers than those above) has not been completed.

Robert