Re: Re: Re: Goo prophylaxis

Nicholas Bostrom (bostrom@mail.ndirect.co.uk)
Wed, 3 Sep 1997 14:33:24 +0000


CurtAdams:

> In a message dated 9/2/97 7:15:10 AM, Nicholas Bostrom wrote:
>
> >CurtAdams wrote:
> >
> >> >Cars are optimised.
> >>
> >> In no sense. They get substantially better every year even after you
> >> discount for technological improvement. In what sense could they possibly
> >> be optimal at any point?
> >
> >(Fairly optimal.) In the sense that it would be *much* easier to
> >build something on four wheels that moves by itself than it is to
> >build a vehicle that can compete on the open market today. The
> >context was this: Carl said that a simple self-replicator would
> >contain about the same amount of information as a car. So some
> >kind of analogy-inference might be made if we know how difficult it
> >is to design a car. Well, how difficult is it? Many highly skilled
> >people have been busy for many decades designing cars, so it seems
> >very hard. But this would be to overlook the fact that the
> >self-replicators we are trying to build need not be optimised in the
> >sense that cars need to be, if they are to be acceptable to car
> >designers. The relevant analogy (a weak one, to be sure) is rather to
> >steerable automobiles on four wheels or something like that; not to
> >a car that could be sold today.
>
> Any self-replicator has a much harder job than a steerable car. Based on Von
> Neumann's estimates, even in a tank with semi-processed raw materials, you'll
> need about 250,000 parts in a sophisticated (i.e., well-designed) system.
> Carl's point (I think) was that that is roughly comparable to a modern car -
> an early car is much simpler. The analogy "simple car"="simple replicator"
> is not correct, in the same way that "simple cart"!="simple car".

Ok, that makes sense. It seems that we have exhausted the power of
analogies now, though. We know that making a nanotech
self-replicator, even given perfect atomic positioning, would be
non-trivial. It would be useful if it were possible to come to a
slightly more precise conclusion, say about the order of magnitude of
the number of "genius-years" required. We would need to take into
account the possibility of developing better computers that could
help in simulations, and other enabling technologies.
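
As a purely illustrative sketch of what such an order-of-magnitude
estimate might look like, here is a toy calculation (in Python); the
only input taken from the discussion above is the ~250,000-part
figure, and the design-throughput and CAD-speedup numbers are made-up
placeholders, not data:

    # Toy order-of-magnitude estimate of "genius-years" needed to design
    # a self-replicator.  Only the part count comes from the Von Neumann
    # figure quoted above; the other inputs are hypothetical placeholders.
    parts = 250000                 # parts in a sophisticated replicator (quoted above)
    parts_per_genius_year = 1000   # hypothetical design throughput per genius-year
    cad_speedup = 10               # hypothetical gain from better CAD and simulation

    genius_years = parts / (parts_per_genius_year * cad_speedup)
    print("roughly %d genius-years" % genius_years)   # 25 with these placeholders

The point of the sketch is only that the answer is dominated by the
assumed throughput and speedup factors, which is exactly what we do
not know yet.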
Atomic positioning, and atomic monitoring, are perhaps just around
the corner. The STM and AFM achieve this in a very imperfect manner,
but good grip molecules that could be placed on the tip of the needle
might be developed within a few years. How long will it take
after that until we have ab initio self-replicators? Ten, fifteen
years? 2015? I suppose that most of the action would happen in the
last few years, when there would be frenetic activity in labs all
over the world. By 2010 we should have some pretty impressive CAD
tools, which would further accelerate the process. Drexler tends to
avoid predictions about when things are going to happen, but if
pressed he says he believes that we will have a general assembler
sometime during the first third of the next century, and more likely
in the earlier part than in the later part of this interval. Hmm. On
my transhuman home page I say that I believe there is at least a 50%
chance that we will have superhuman artificial intelligence within 50
years. Perhaps I should strengthen this to 30 years?

------------------------------------------------
Nicholas Bostrom
bostrom@ndirect.co.uk

*Visit my transhumanist web site at*
http://www.hedweb.com/nickb