On Wed, 3 Nov 1999, Billy Brown wrote:
> No kidding. My question is: "Exactly what factors would create this
> particular selection pressure, rather than some other one."
This depends on the "goals" of the species. We are talking about conscious entities here, so in theory they are free to pick their own goals.
If the goal is "longevity within the limits of the known universe", then there is a self-generated selective pressure to avoid hazards (black holes, blazars, etc.).
If the goal is "escaping from a doomed universe", then the selective pressure is to "think" or "architect" your way into a better universe.
Given our current knowledge of physics, I can't say which of those the SIs would want, but I propose that they do know. Since I would hope the laws of physics are universal, I would expect all SIs to adopt only one of those two perspectives.
> No, but we can make some interesting observations. First off, allow me to
> point out that the light speed barrier does not impose a limit on the scale
> of computing devices.
No, the theory of gravity imposes a limit on the scale of computing devices: too much computronium in too small a volume and you become a black hole. The speed of light imposes a limit on computational "throughput". If you have a "cellular automata" architecture, where all you have to do is communicate with your nearest neighbor, this isn't too bad (the limits in this case are thermodynamic: as you compute at lower & lower temperatures you have to make your radiators bigger, thereby decreasing the rate at which you can pass your "state" information to your neighbor). If on the other hand you have a shared-memory architecture, then you have synchronization issues where the time it takes to verify that no processor node holds a memory lock is a severe limit on computing throughput. Those limits are determined by interprocessor distances, since synchronization times are speed-of-light limited.
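To put rough numbers on the shared-memory vs. nearest-neighbor difference: here is a quick sketch comparing a global lock check across a solar-system-scale structure (my assumption: ~50 AU radius) with a ~100 km nearest-neighbor hop. The constants are standard; the distances are my illustrative choices.

```python
# Light-speed synchronization delays: global lock check vs. neighbor hop.
C = 299_792_458.0        # speed of light, m/s
AU = 1.495978707e11      # astronomical unit, m

def round_trip_s(distance_m: float) -> float:
    """Minimum time to send a message and get an acknowledgement back."""
    return 2.0 * distance_m / C

# Shared memory: verify no node across a ~50 AU structure holds a lock.
shared_memory_lock = round_trip_s(50 * AU)
# Cellular automata: exchange state with a neighbor ~100 km away.
neighbor_hop = round_trip_s(100e3)

print(f"global sync: {shared_memory_lock / 3600:.1f} hours")
print(f"neighbor hop: {neighbor_hop * 1e3:.2f} ms")
```

A global synchronization step costs on the order of half a day; a neighbor hop costs under a millisecond, which is why only locally-decomposable problems scale well.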
> If you have a closely spaced cloud of computing systems, and you add
> more nodes to the outside of the cloud, the power of the overall
> system will increase no matter how big the cloud has become.
You can't do this; you hit the thermodynamic limits. In a Matrioshka Brain architecture you harvest stellar power at the highest temperature materials allow (e.g. tungsten-carbide rod-logic computers) and radiate the waste heat outward through successively lower-temperature computing layers until the final layer radiates at ~the cosmic background. You end up with a computer several times larger than the solar system (if you have sufficient material to build it). The computers on the outside can't absorb energy from the inner computers at 2.7K and also radiate at 2.7K, because then they can't do any work. If they generate their own energy (say, using fusion reactors), they still *must* absorb the heat radiated by the inner computers and re-radiate that plus the heat they generate themselves. You get into a runaway radiator-sizing problem.
The compute nodes on the outer shells are 100's of km apart (assuming each node funnels 100 KW of power through a 1 cm^3 nanocomputer-type device). The propagation delays between nodes are *not* insignificant in these cases.
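A quick Stefan-Boltzmann check on that spacing figure (the 100 kW per node and ~3 K outer-shell temperature are the assumptions from above; the constant is standard):

```python
# Stefan-Boltzmann radiator sizing: a node dissipating P watts at
# temperature T needs radiating area A = P / (sigma * T^4). Node spacing
# is taken as sqrt(A), i.e. each node "owns" a square patch of the shell.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(power_w: float, temp_k: float) -> float:
    return power_w / (SIGMA * temp_k ** 4)

# 100 kW nanocomputer node on the outermost shell, radiating at ~3 K:
a_cold = radiator_area_m2(100e3, 3.0)
print(f"outer shell: {a_cold:.2e} m^2, spacing ~{a_cold**0.5 / 1e3:.0f} km")

# The same node on a hot inner shell at 300 K needs far less area:
a_hot = radiator_area_m2(100e3, 300.0)
print(f"inner shell: {a_hot:.0f} m^2, spacing ~{a_hot**0.5:.0f} m")
```

The ~3 K case comes out to roughly 150 km between nodes, consistent with the "100's of km" figure, and the 1/T^4 area scaling is why the cold outer shells eat all the construction material.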
> The light speed barrier only limits your ability to apply all of
> that power to a single sequential calculation.
No, the light-speed barrier has fundamental implications for how fast you can synchronize your computations. If I moved all the neurons in your brain 1 km apart but left the interneuron transmission speed the same, *you* would think slower (if at all). If you think slower, it takes you longer to solve a problem. *Only* if it is the type of problem that can be subdivided (e.g. molecular modeling) and fits well on a cellular automata architecture (which molecular modeling does) do problem solution times scale nicely. If you are dealing with "cross-pollination" of memes stored on opposite sides of a solar system, then the delays *hurt*!
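The spread-out-brain point is just a ratio of distances, but the magnitude is worth seeing. A sketch, assuming ~1 mm typical interneuron spacing and ~100 m/s conduction velocity (round figures of mine, not precise neuroscience):

```python
# Per-hop signal delay when interneuron spacing grows from ~1 mm to ~1 km
# while the transmission speed stays fixed.
SIGNAL_SPEED = 100.0               # m/s, assumed axonal conduction velocity

hop_normal = 1e-3 / SIGNAL_SPEED   # ~1 mm spacing, the normal brain
hop_spread = 1e3 / SIGNAL_SPEED    # ~1 km spacing, the stretched brain
slowdown = hop_spread / hop_normal

print(f"normal hop: {hop_normal * 1e6:.0f} us")
print(f"spread hop: {hop_spread:.0f} s")
print(f"serial slowdown: {slowdown:.0e}x")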
> Stars are terribly wasteful, after all - they convert so much mass
> into a form that is terribly inconvenient to collect.
But they *may* just be the best way to produce heavy elements. We are not *energy* constrained! We are construction-material constrained, due to the scaling laws for radiators: radiator area scales as 1/T^4, so low-temperature radiators require a *lot* of material.
> A frugal civilization would convert them all into some form optimized
> for long-term storage - perhaps a large number of brown dwarfs, or
> black holes of carefully-chosen mass. They would sweep up all matter
> that might fall into black holes, collect the energy radiated by neutron
> stars, and generally suppress all energetic events in favor of
If energy conservation were your concern, this would be true. However, at the current stage of development of the Universe, we want computronium materials, not energy. The situation will reverse at some point in the future (depending on the rate at which they/we convert material into heavier elements).
> My point, however, is that the universe we see is not particularly
> optimal for anything at all.
That is a statement which cries out for evidence.
> That is inconsistent with the idea that SIs exist, because if they
> do then the entire universe should be sculpted to fit their goals.
Precisely. That is why I devote the time I do to looking at the astronomical evidence (esp. "unexplained" large-scale phenomena that require the invention of improbable answers) to see if they might indicate that there really are SIs.
> Caveat: If there is no possible way of getting around the light speed
> barrier, then they will have altered only a portion of the observable
> universe. In that case the demarcation should be obvious, and our entire
> galaxy is on the "undeveloped" side.
You should say "our entire *VISIBLE* galaxy". Given that 90% of the mass is missing and we have evidence for 200 billion "thingys" we can't see, the "undeveloped" claim is entirely in the eye of the viewer.
> It isn't an assumption. No matter what problems you need to solve,
> controlling more mass/energy lets you do a better job of it.
No, that is simply wrong. You can control a huge amount of mass, but you can't get it close enough together to gain computational throughput, because you fall out of the universe. You can control a huge amount of energy, but you can't use it any more effectively, because you *melt* yourself.
Go read Nanosystems, Sections 11.5 and 12.8 -- quoting from the latter: "In nanomechanical systems of sufficiently large scale, the ~10^12 W/m^3 power dissipation density of ~1 GHz nanomechanical logic systems described here ***exceeds any possible means of cooling***."
The cooling requirements limit single-node computational capacity, and the material from which the node is constructed determines the operating temperature, therefore the radiator size, and thus the inter-node propagation delays.
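You can see why that Nanosystems figure is brutal with a radiation-only sanity check (a simplification of mine: real designs use flowing coolant, but the quoted claim is that *no* means suffices). Take a 1 cm cube of logic at 10^12 W/m^3 and ask what blackbody temperature its own surface would need to shed the heat:

```python
# Blackbody temperature needed for a 1 cm cube of nanomechanical logic
# at 10^12 W/m^3 to radiate its own waste heat through its surface.
SIGMA = 5.670374419e-8            # Stefan-Boltzmann constant, W/(m^2 K^4)

power = 1e12 * (0.01 ** 3)        # W dissipated by a 1 cm cube -> 1 MW
area = 6 * (0.01 ** 2)            # m^2 of cube surface
flux = power / area               # required W/m^2 through the surface
t_needed = (flux / SIGMA) ** 0.25

print(f"power: {power:.0e} W, flux: {flux:.1e} W/m^2, T ~ {t_needed:.0f} K")
```

The answer is on the order of 13,000 K, well past the melting point of any material, which is why node power (and hence single-node capacity) has to be throttled far below the raw logic density.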
> If there are big computational problems to solve, converting the
> whole galaxy to J-brains will give you a better chance of solving them.
*Only* that subset of all possible problems that can be divided into data subsets that can be solved on "local" computational resources. If the problem is bigger than the local node (or collection of nodes) then you have to pay the price of the communication delays.
I can have a virtual reality encompassing an entire M-Brain full of uploads in one solar system. Synchronization and communication between that entity and another similar entity are going to be both *slow* and *expensive* (and that isn't even a problem that requires tight synchronization).
> If the universe is doomed to eventually run down, a large-scale
> conservation project will keep you alive longer.
I thought about this. Do you want to build M-Brains around big stars or small stars? It turns out that the computation available over the stellar lifetime is roughly the same (this is determined by stellar physics). So there seems to be little difference between running your virtual reality "fast" or "slow". If you run it "slow" and are interested in the external universe, you get to watch more of it.
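The "roughly the same" claim falls out of the standard main-sequence scaling relations. A sketch, assuming the textbook approximation L ~ M^3.5 and lifetime t ~ M/L (good only to a factor of a few, and my choice of exponent):

```python
# Main-sequence scaling in solar units: luminosity L ~ M^3.5,
# lifetime t ~ (fuel / burn rate) ~ M / L. Total energy delivered over
# the star's life is E = L * t, so E ~ M: energy *per unit stellar mass*
# is roughly constant -- a big star just spends its budget faster.
def total_energy_solar_units(mass: float) -> float:
    lum = mass ** 3.5          # luminosity, solar units
    lifetime = mass / lum      # main-sequence lifetime, solar units
    return lum * lifetime      # total energy ~ mass

for m in (0.5, 1.0, 10.0):
    e = total_energy_solar_units(m)
    print(f"M = {m:4}: E = {e:5.2f}, E per unit mass = {e / m:.2f}")
```

Energy per unit of construction mass comes out flat across the mass range, so the choice of star sets the *clock rate* of the virtual reality, not its total lifetime computation.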
> If it is possible to profit from aggression, mobilizing all
> available resources will be essential to both defense and predation.
> The list goes on and on.
This goes back to my comment on "enforcing" directives across galactic distances. *Only* if the remote SI/M-Brain etc. is designed by you to be non-self-modifying do you have a chance of this occurring. If it is a conscious, self-modifying entity, then it may well decide its interests & goals are quite different from the mission you gave it.
Defense may be a small requirement because of the difficulty of destroying solar-system-sized structures (down to the atomic level) across interstellar distances. Predation may be pointless due to the aforementioned physical limits.
> The only scenario I can see in which there would be no expansion is one in
> which all SIs undergo some mental transformation that leads them to end
> their own existence.
This comes back to marginal benefits. You have to make a case that the marginal benefits of expansion *exceed* the marginal benefits of letting some fraction of the galaxy/universe evolve naturally. To me this is as simple as the question: do you eat the last tiny crumb of food on your plate? Generally not, if you have already eaten all you need for sustenance.
> I don't see that as a plausible scenario (especially since they
> haven't ended *our* existence as well). Do you have another one?
A plausible scenario is that they evolve to local limits and then go to sleep waiting for more computronium to evolve or something worth watching (that they haven't watched 10,000,000 times before) occurs. Or they put up their automated defense systems and spend all of their attention evolving an ever more complex internal virtual reality.
> And I've already referred to this line of thinking as the "Big Universe
> Fallacy". Interstellar trips would be long and boring for an SI, but
> there is nothing even remotely difficult about them.
I agree that it is not difficult, but it *is* expensive! Can one of our rocket engineers compute the delta-V requirements for moving a solar-system-mass object to within planetary distances of a star a few dozen light-years away?
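Lacking a rocket engineer, here is a back-of-envelope version under loud assumptions of mine: move one solar mass at 300 km/s (0.001 c) to a star 30 light-years away, powering the burn with the Sun's entire luminosity at 100% efficiency (and ignoring the matching burn to stop at the far end, which doubles the bill).

```python
# Kinetic-energy cost of hauling a solar mass to a nearby star.
M_SUN = 1.989e30      # kg
L_SUN = 3.828e26      # W, total solar luminosity
LY = 9.461e15         # m per light-year
YEAR = 3.156e7        # s per year

v = 300e3                              # m/s cruise speed (0.001 c), assumed
ke = 0.5 * M_SUN * v**2                # J just to reach cruise speed
burn_years = ke / L_SUN / YEAR         # years of *total* solar output needed
trip_years = 30 * LY / v / YEAR        # transit time at that speed

print(f"KE: {ke:.1e} J")
print(f"burn: {burn_years:.1e} years of the Sun's entire output")
print(f"trip: {trip_years:.0f} years in transit")
```

Even at these leisurely speeds you spend millions of years of the star's whole output on the delta-V and tens of millennia in transit, which is the sense in which "not difficult" and "not expensive" come apart.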
> Sending out automated devices to do things is essentially free -
> the energy costs of even an ambitious program are tiny compared
> to the energy output of a star, and you really don't have to send
> more than a few grams of matter.
This remains a *very* open question. It isn't clear that nanoprobes can survive the radiation of interstellar space. But leaving that aside, it becomes a question of: what is the point?
So you send out nanoprobes to convert a nearby star into an unpatterned SI. How do you transport your uploads to that SI? You are going to have one huge telephone bill. If you relocate half of your uploads, what does this get you? Your synchronization times between the two systems are going to make coordinated thought very slow.
If all you want to do is harvest matter, you are back to the DeltaV requirements. If you want to harvest energy and beam it back, you are back to the heat radiation problems again.
I can't see the usefulness of colonization.
> Give me just about any set of hypothetical goals, ...
Compute the complete future of galactic "hazards" as rapidly as possible.
> I was referring to *cultural* uniformity. This is another area I feel
> your analysis unduly neglects. Different people choose different goals,
> and use different means to pursue them.
Yes, but that is due to the fact that we are individuals with very low communication bandwidth. They say that married people become more similar over time. That must be because they resolve their "differences" in favor of one position, or agree to expand to encompass both positions (as a pseudo-single unit). The communication bandwidth is so much higher, and the "universal" knowledge so much greater, in SIs that it is hard to imagine sub-SIs maintaining those differences as anything other than fashion statements.
I think sub-SIs evolve into highly similar entities. I think SIs reach a small set of optimal architectures due to the physical limits and convergent evolution.
> The historical trend has been for this diversity to grow as we increase
> our knowledge (and more importantly, our capabilities). Why do you
> expect this trend to reverse?
It depends on how much of the knowledge you have access to. If your knowledge is my knowledge and my knowledge is yours, then we might be on a trend to minimize differences. Now, you may choose diversity, but it will be for aesthetic reasons rather than rational/logical ones. A rational/logical approach would argue that for any specific problem there should be a *limited* set of "best" solutions, much smaller than the set of *all* solutions.
> But if you really want to explore possible paths of evolution this is an
> absolutely terrible way to go about it. It would be far more effective to
> apply some of that SI intelligence to inventing abstract, high-level models
> of biological and memetic evolution. Then you can convert the solar system
> to a computing device and get your results in a fraction of the time. ...
No argument. If you can get the level of granularity you want in the simulation, then this is a much better way to do it. By this argument the SIs are too busy running these simulations to bother cleaning up the crumbs of the galaxy that are left. If so, then they are not likely to be here, which is just fine by me, since it means we get to surf the wave without having to worry about being turned off.