On Fri, 17 Sep 1999, Robin Hanson wrote:
> That is valuing computation as an intermediate good, not as an end good.
> It is very different from claiming that aliens value only computation.
> One can easily value other things and yet choose to let technology grow.
> That's what we all do now. I could more easily argue that natural
> selection will favor civilizations that try to expand spatially.
I will agree that spatial expansion will trump intelligence if the intelligence isn't being used effectively. We are now just on the borderline between using our "intelligence/capacities" to predict and eliminate hazards vs. expanding spatially so that our failure to predict and eliminate is itself a hazard.
However, spatial expansion tends to be pursued by "limited" intelligences. A "maximal" intelligence would presumably occupy all of the available computronium and eliminate any subintelligences within its sphere.
Here, I think is the fundamental point of our discussion --
What is maximally survivable -- (a) the largest distributed intelligence, or (b) the densest local intelligence?
I would argue that (b) the densest local intelligence has the greatest survival potential, provided it can predict and track events based on physical laws in its local universe.
> > > And I'm sure we could identify computational problems that are so
> > > hard that one could compute them more quickly by sending out probes
> > > to turn the universe into computers, rather than just using one system
> > > to compute with. How can you know that advanced creatures aren't
> > > interested in such problems?
> > Yes, but there is probably only a small set of computational
> > problems where the data can be separated into logical subdivisions
> > that do not require significant communication of inputs and outputs.
> > ... You win much more by figuring out the optimal computer architecture
> > and building it locally, than you do by colonizing the nearest stars.
> I'm not going to take your word on this. I want to see some analysis
> before I'll be persuaded.
Agreed, this is just my gut feeling. I don't know how large the space of computational problems that can be subdivided into "survival-related" and "theoretical-interest" really is. But the propagation delays to nearby (other-stellar) supercomputers are so large that there is huge pressure to solve problems for oneself.
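To make the latency point concrete, here is a rough back-of-envelope sketch (the star distance and the one-year job size are my illustrative assumptions, not figures from this thread): even an infinitely fast remote computer at the nearest star loses to local computation whenever the job takes less than the signal round trip.

```python
# Back-of-envelope: when does shipping a subproblem to a computer at
# another star beat solving it locally? Numbers are illustrative.

LIGHT_YEAR_S = 365.25 * 24 * 3600   # seconds of light travel in one year
STAR_DIST_LY = 4.24                 # assumed: roughly the nearest star

def remote_total_time(local_compute_s, remote_speedup):
    """Wall-clock time if the work is sent to a remote system that is
    `remote_speedup` times faster, including the light-speed
    round-trip delay for shipping the problem out and the answer back."""
    round_trip_s = 2 * STAR_DIST_LY * LIGHT_YEAR_S
    return round_trip_s + local_compute_s / remote_speedup

def local_total_time(local_compute_s):
    """Wall-clock time if the work is simply done locally."""
    return local_compute_s

# A job that takes one year locally: the ~8.5-year round trip dominates,
# so even an infinitely fast remote computer loses.
one_year = LIGHT_YEAR_S
print(local_total_time(one_year) < remote_total_time(one_year, float("inf")))
# → True
```

The crossover only favors the remote system for jobs whose local runtime exceeds the round-trip delay by a margin that the remote speedup can exploit, which is the pressure toward dense local architectures the paragraph above describes.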
It comes down to a fundamental question -- at what point does anticipational computation trump the chaos of the known universe?
If your survival is guaranteed, as seems likely, then your problems become identifying the "difficult" computational problems and the optimal architectures for solving them.