Eliezer S. Yudkowsky writes:
>> ... Steady state growth rates could easily rise by an order of magnitude
>> or more. But with steady growth, wages and per-intelligence consumption
>> could fall as fast as computer prices do.
>I shall leave aside my personal objections, since I think you may find
>Gubrud's arguments more convincing;
>"... any military production system fully automated by advanced artificial
>intelligence, would lead to instability in a confrontation between rough
>equals. ... feel pressured to preempt, ... close contact between forces at
>sea and in space would give an advantage to the first to strike."
>In the event that you've read it already, which seems probable, I would just
>like to say this: The flip side of rapid progress is incredibly destructive
>wars. And if you think the currency meltdown is causing global
>destabilization, the differential equations you blithely toss around would
>shatter national economies like glass. Since I don't believe a Weak
>Singularity is probable, I can say dispassionately that the Weak Singularity
>your paper models would probably end in the violent death of a significant
>fraction of mankind.
I'm not sure what your "objection" is here. I have read Gubrud and had a long email conversation with him. His basic thesis seems to be that new technology induces military instability, an argument that is not particular to nanotech or AI. I don't find this terribly convincing - technology is changing a lot faster now than a thousand years ago, but it's not clear wars are worse.

But even if Gubrud is right, how is that an objection to my analysis? If machine intelligence appears, it is a big new tech, so Gubrud predicts wars. I predict changes in growth rates, wages, and population. How are these predictions in conflict?
firstname.lastname@example.org  http://hanson.berkeley.edu/
RWJF Health Policy Scholar, Sch. of Public Health  510-643-1884
140 Warren Hall, UC Berkeley, CA 94720-7360  FAX: 510-643-8614