Re: Review of Vinge's Deepness in the Sky

Robin Hanson (hanson@econ.berkeley.edu)
Tue, 02 Mar 1999 13:30:51 -0800

On 2/27/99, Hal Finney wrote:
>With the entire story taking place in the Slow Zone, technology is capped
>at a level not too far above our own. Super-intelligence and even human
>level AI is impossible; software projects inevitably bog down in their
>own complexity before something as complex as human AI can be reached.
>Medical science allows people to live perhaps 300-500 years at most
>(although with cryonic suspension their lives can extend over much
>longer periods). Nanotech remains a dream which was never fulfilled.
>
>Most civilizations are trapped in a cycle of boom and collapse, similar
>to Niven and Pournelle's Moties. The Qeng Ho help to moderate this
>effect somewhat as they carry technology between star systems in their
>ramscoop powered vessels. ... All in all I found it a depressingly
>limited future. ... if you're looking for the kind of grand-scale
>ideas which Vinge provided in Fire, I don't think you'll find them here.

I just finished the book moments ago. It was a fun read, if a bit long.

I find the idea of exploring a "depressingly limited future" very interesting and relevant. If one could paint a detailed enough picture of just how things could be so limiting, that could help us better evaluate our chances of being slowed down by such limits, and perhaps help us avoid such scenarios.

So does Vinge present a plausible detailed picture? I'm not sure. Limits to software complexity were plausibly presented, and so I could buy the lack of AI or advanced automation. Though the story doesn't say so, I suppose complexity limits could also explain the life extension limits described. The failure to make substantial progress in physics seemed more arbitrary, though I suppose very subtle effects might remain hidden for millennia until the right clues were presented.

More puzzling was the failure to achieve anything like nanotech. I suppose complexity limits could be behind this. In one case, a system with "a technology as high as Humankind ever attained" achieved something close to nanotech, and the dust our hero bought from them became a core element of all traders' starships, and the key to our hero's power. But I don't recall that system being noted for any other abilities to handle complexity. (It was particularly bad at life extension.)

Perhaps most puzzling is the failure to use any significant fraction of the resources at each solar system. Human populations around a star are never more than "billions", and we see nothing like wholesale conversion of asteroids and comets. "Sooner or later [each system] ossified and politics carried it into a fall."

These falls are very severe, often requiring recolonization from outside, and otherwise seem to require rebuilding from scratch. This is much more severe than the fall of the Roman Empire, for example. Powerful weapons of war might explain this, but the worst weapons we see in the story are nukes. Are nuke wars really enough to destroy civilizations so thoroughly?

Also, the numbers don't seem to add up. One big meeting described had ships traveling from 300 star systems, each traveling for between 100 and 1000 years, and "perhaps a third ... would have fallen from civilization in the time it took for voyage and return."

This suggests an expected civilization lifetime of 500 to 5000 yrs (exponentially distributed) after achieving starflight. But at current population growth rates even a 500 year lifetime gives a median population growth factor of 1000 between initial starflight and fall. And at current economic growth rates the economy would grow by a factor of 10 billion.
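The arithmetic here can be checked directly. Here's a minimal sketch, assuming exponentially distributed lifetimes, round trips of twice the one-way 100-1000 year voyage times, and roughly late-1990s world growth rates (~1.4%/yr for population, ~4.6%/yr for the world economy; the exact rates are my assumptions, not from the post):

```python
import math

# Book's figures: one-way voyages of 100-1000 years, so round trips of
# 200-2000 years, with about 1/3 of systems fallen during a round trip.
# Solve 1 - exp(-T/tau) = 1/3 for tau, the mean exponential lifetime.
for round_trip in (200.0, 2000.0):
    tau = round_trip / math.log(1.5)
    print(f"round trip {round_trip:4.0f} yr -> mean lifetime ~{tau:.0f} yr")

# Growth over a 500-year lifetime at assumed late-1990s world rates:
# ~1.4%/yr population growth, ~4.6%/yr gross world product growth.
lifetime = 500.0
pop_factor = math.exp(0.014 * lifetime)   # roughly a factor of 1000
econ_factor = math.exp(0.046 * lifetime)  # roughly a factor of 10 billion
print(f"population factor ~{pop_factor:.0f}")
print(f"economy factor ~{econ_factor:.1e}")
```

With those assumed rates the numbers do come out as stated: mean lifetimes of about 500 and 5000 years, a population factor near 1000, and an economic factor near 10 billion.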

Now maybe growth rates were slower, though folks in the story didn't bother to note any dramatic differences in growth rates between places they'd traveled. And even if the economy only had a doubling time of one tenth the expected lifetime, then among a thousand human systems more than half of human economic power should reside in the single most advanced system. Yet such a vast concentration of power was not noted in the story.
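This concentration claim can be illustrated with a small Monte Carlo sketch, under assumed parameters: 1000 systems, ages since starflight drawn from an exponential distribution with mean tau, and each economy doubling every tau/10 years. (The model and parameters are my illustrative assumptions; the post gives only the doubling-time-to-lifetime ratio.)

```python
import random

random.seed(0)

def dominant_share_frequency(n_systems=1000, tau=500.0, trials=200):
    """Fraction of trials in which the single richest system holds more
    than half of the combined economic power of all systems."""
    hits = 0
    for _ in range(trials):
        # Ages since achieving starflight: exponential with mean tau.
        ages = [random.expovariate(1.0 / tau) for _ in range(n_systems)]
        # Wealth doubles every tau/10 years of a system's age.
        wealth = [2.0 ** (age / (tau / 10.0)) for age in ages]
        if max(wealth) > 0.5 * sum(wealth):
            hits += 1
    return hits / trials

print(dominant_share_frequency())
```

Because wealth is exponential in an exponentially distributed age, its distribution is extremely heavy-tailed, and in most trials the richest system alone outweighs the other 999 combined, consistent with the claim above.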

The frustrating thing about using science fiction to think about these issues is not knowing whether the author thought they had good reasons to expect the things described, whether those things were just choices to make the story easier to tell, or whether the author just didn't notice them. I suspect one big problem is that Vinge doesn't really believe in these limits.

Robin Hanson

hanson@econ.berkeley.edu     http://hanson.berkeley.edu/
RWJF Health Policy Scholar   FAX: 510-643-8614
140 Warren Hall, UC Berkeley, CA 94720-7360   510-643-1884
after 8/99: Assist. Prof. Economics, George Mason Univ.