1. Provided that technological research continues, it is likely that
ab initio molecular nanotechnology, including general assemblers,
will eventually be developed.
[Slightly disambiguated]
2. Hostile attack goo cannot be allowed to gain a sizable foothold;
therefore no sizable region of the globe may be left unprotected by
an immune system.
[This leaves open the possibility that there may be
independent complementary immune systems, either side-by-side or
overlapping. The main reasons for 2 became apparent in the discussion
we had about destruction by induction, the possibility of building
huge quantities of TNT, and the inadequacy of Anders' immune system
proposal to deal with these problems. 2 does not say anything about
whether an isolated local immune system could be effective
against grey goo.
Given an island vs. sea battle, and a certain minimum technology
level on both sides, the sea will win, whether the "island" is a
malevolent spore or a city.
(I think that Eliezer shares this position and that a "not" has been
lost from his statement:
>1b: Personal immune systems are feasible as a defense against
> death goo.)]
3. In the absence of a global immune system, if most people could
make their own nanotech machines then all human life on earth would
soon become extinct.
[We leave open the possibility that subterranean bacteria may hang on,
if only because nobody bothers to try to exterminate them. Also note
that we leave open whether the global immune system would be
unique and monolithic or otherwise. The point is that the vast
majority of the earth's surface (and crust?) has to be covered by one
immune system or another.]
4. In the absence of ethical motives, the benefits would outweigh the
costs for a nanotech power that chose to eliminate the competition or
prevent it from arising, provided it had the ability to do so.
[Hal agrees with this, but several people said they don't. In at
least one of the cases, the disagreement crept in outside the claim
that is made in 4. 4 does not say that the first nanotech power
*will* eliminate the competition (although I happen to believe that
that is rather likely), only that in the absence of ethical motives
it would be rational for it to do so. But ethical motives need not be
absent, and its decision making (democratic?) need not be perfectly
rational. Also, I use goals as the principle of individuation, so the
nanopower may have a rich internal structure, and it may encompass
many nations with similar aims, for example. With this clarification,
I don't see how anybody could disagree with it, in the light of the
cost-benefit analysis I posted a few days ago.
The comparative advantage objection is mistaken, as Hal explained:
>In particular, the doctrine of
>comparative advantage doesn't seem relevant. You aren't going to
>lose access to the resources represented by the competition; rather,
>you are going to subsume those resources and gain greater control
>over them.
The lost-information objection I find completely unconvincing.
Carl Feynman says he disagrees with 4 but hasn't been posting
much because of an ear infection. I hope he will get better soon. I
want to hear what he has to say about my cost-benefit analysis.]
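
(As a purely illustrative aside, the bare form of such a cost-benefit
comparison can be put in a few lines of Python. Every quantity below,
p_success, V_monopoly and so on, is a hypothetical placeholder chosen
only to exhibit the structure of the argument; none of the numbers
come from the analysis itself.)

  # Toy expected-utility comparison for a nanotech power that lacks
  # ethical motives. All values are hypothetical placeholders.
  p_success = 0.9        # assumed probability a first strike succeeds
  V_monopoly = 100.0     # payoff from controlling all resources alone
  V_shared = 40.0        # payoff from a lasting share under competition
  C_strike = 5.0         # direct cost of eliminating the competition
  p_rival_strike = 0.5   # assumed chance a rival strikes first instead

  EU_eliminate = p_success * V_monopoly - C_strike   # here: 85.0
  EU_tolerate = (1 - p_rival_strike) * V_shared      # here: 20.0
  print("EU(eliminate) =", EU_eliminate)
  print("EU(tolerate)  =", EU_tolerate)
  # With these illustrative numbers elimination dominates, which is
  # all that 4 claims for a rational agent without ethical motives.
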
5. Unintentional grey goo is a relatively mild danger compared to
attack goo.
[Added]
Anders wrote:
>I think this kind of consensus-description is a good idea, although I
>have the feeling that Nicholas has biased it a bit in his direction
>rather than the consensus perceived by (say) me.
Well, I should have said that what I was searching for was a
*consensus sapientum* (a consensus of the wise), where the sapientes
are defined as those who by and large agree with Mr. Bostrom ;-).
Nicholas Bostrom
London School of Economics
Department of Philosophy, Logic and Scientific Method
email: n.bostrom@lse.ac.uk
homepage: http://www.hedweb.com/nickb