"Billy Brown" <bbrown@conemsco.com> writes:
> We could burn a lot of electrons arguing about where the borders of the
> danger zone fall - that's why I didn't offer any numbers. IMO, we actually
> have a zone of steadily increasing danger as the rate of change increases.
> Right now society doesn't have time to fully adjust to new developments, but
> we at least have time to wonder what to do about them. More importantly,
> there is time for the feedback mechanisms of free markets and various social
> institutions to have some effect on the course of events. The faster the
> rate of change, the less chance there is for such forces to operate, and the
> more unstable things become.
Yes. This is where we transhumanists can do a lot of good by spreading our more useful views and helping create systems for very quick adaptation in society.
(On the other hand, fast adaptation also causes fast changes - the quicker the markets become, the more unpredictable they become.)
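To make that intuition concrete, here is a toy sketch in Python (all parameter values invented for illustration): a textbook price-adjustment loop where the only thing varied is the adjustment speed k.

def simulate(k, steps=40, p0=10.0):
    """Iterate p <- p + k * (demand(p) - supply(p))."""
    a, b = 100.0, 2.0   # demand D(p) = a - b*p (assumed numbers)
    c, d = 10.0, 1.0    # supply S(p) = c + d*p (assumed numbers)
    p = p0
    for _ in range(steps):
        p = p + k * ((a - b * p) - (c + d * p))
    return p

# The equilibrium p* = (a - c)/(b + d) = 30 is stable only while
# k < 2/(b + d), i.e. k < 2/3 with these numbers.
for k in (0.2, 0.65, 0.8):
    print(f"adjustment speed {k}: price after 40 steps = {simulate(k):.2f}")

Below k = 2/3 the price settles down to 30; above it, faster adjustment just means bigger overshoots, and the system blows up.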
> > I don't think there is an upper end to the danger zone, simply because
> > I regard the "...and then the owners of the technology take over the
> > world" solution only work if they are *far* ahead of everyone.
>
> I agree. However, you can generate a scenario where that would happen if
> you mix the right assumptions together. What we need is:
>
> 1) automated engineering is very easy
> 2) nanotech design is manageable
> 3) computers are very fast
> 4) nanotech is very hard to actually implement for some reason
>
> That creates a situation where you have designs for very advanced devices
> that no one can build. The first group to get an assembler can use it to
> implement those designs, giving them a huge instant jump in power.
> Ordinarily we would expect other groups to duplicate the feat and catch up,
> but nanotech also lets you make faster computers. So, the leading power
> whips up a huge mob of supercomputers and sets them to work designing even
> better hardware. Their computers will be on a faster improvement curve than
> anyone else's, so no one can catch up until they hit the limits of what is
> possible.
Would they? Note that somebody has to pay for all this (by assumption expensive and hard) research, and they will want a return on their investment. Having the nanotech advantage would be a wonderful investment, but they would have to divide their resources between profiting from the breakthrough and going further. Going further is fairly easy and cheap (by your assumptions), so most resources will be used for applying the new technology.
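A back-of-envelope sketch of this resources argument, with all rates, fractions and lead times invented for illustration: capability compounds at a rate proportional to the fraction f of resources ploughed back into further development, so a leader that mostly exploits the breakthrough can be overtaken by a later group that mostly reinvests.

import math

r = 1.0           # assumed raw growth rate per year at full reinvestment
f_leader = 0.3    # leader spends most resources exploiting the breakthrough
f_follower = 0.9  # second group, still racing, reinvests almost everything
lag = 2.0         # assumed years before the second group gets an assembler

def capability(t, f, start=0.0):
    """Capability t years in, compounding at rate r*f once started."""
    return math.exp(r * f * (t - start)) if t >= start else 0.0

for t in (2, 3, 4, 5):
    print(f"year {t}: leader {capability(t, f_leader):6.2f}  "
          f"follower {capability(t, f_follower, lag):6.2f}")

# Setting the two exponents equal gives the catch-up time:
print("catch-up at year", f_follower * lag / (f_follower - f_leader))

With these made-up numbers the lead lasts only about a year past the second breakthrough.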
There is an unspoken assumption in the scenario: that this is the end of the story. But there will be a second group making the breakthrough fairly soon (helped by the knowledge that it is possible and by the hints they will get from watching the first group), and soon after that many others with the same capabilities.

Here comes the assumption Eli and others seem to make: that the first group will try to monopolize the technology totally. This is rather shaky, since it assumes we can predict exactly what the originators will do. In reality the nanotech breakthrough in this scenario would be part of a complex web of economics, social interactions, politics and technology. Would the board of directors of 3M decide to take over the world, in a literal and very military sense? What if somebody doesn't like it and leaks the assembler? And so on; interesting to think about, definitely worth writing a few sf novels to explore, but hardly anything you should base policy on.
In fact, if the acceleration beyond the threshold quickly leads to the physical limits, the difference between the first group to pass it and the others will vanish. The only advantages that would remain are of other kinds: an entrenched market position, a successful monopoly on the technology, or a head start on developing something the first technology couldn't give.
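The same point in toy form: if capability growth saturates at a physical ceiling (logistic rather than endlessly exponential growth), a head start only matters during the transition. Again, every parameter here is invented for illustration.

import math

def logistic(t, start, ceiling=1000.0, rate=2.0, seed=1.0):
    """Logistic capability curve for a group that starts at time `start`."""
    if t < start:
        return 0.0
    return ceiling / (1.0 + (ceiling / seed - 1.0) * math.exp(-rate * (t - start)))

for t in (2, 4, 6, 8, 10):
    first, second = logistic(t, 0.0), logistic(t, 1.5)  # assumed 1.5-year lag
    print(f"year {t:2}: first {first:7.1f}  second {second:7.1f}  "
          f"gap {first - second:6.1f}")

The gap grows during the steep part of the curve and then vanishes as both groups hit the ceiling.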
> Now, I don't think that could actually happen, but that's because I think
> the assumptions are contradictory. You can't get computers fast enough to
> design smart matter and utility fog unless you are already using less
> advanced nanotech to build them. I also don't think you can design
> computers more than a few generations in advance of the ones you already
> have, for essentially the same reason.
Another possibility is a seamless transition - Moore's law seems to point to a nanotech transition around 2020, about the same time as other predictions. So it could just be a quiet shift in technology.
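For what it's worth, the back-of-envelope extrapolation looks like this; both the starting feature size and the halving time are rough assumptions, not data.

import math

feature_nm = 250.0   # assumed current minimum feature size
target_nm = 1.0      # roughly molecular scale
halving_years = 3.0  # assumed halving time for linear feature size

halvings = math.log2(feature_nm / target_nm)
print(f"{halvings:.1f} halvings * {halving_years} years -> "
      f"~{1997 + halvings * halving_years:.0f}")

Nudge the assumptions and the date moves a few years either way, but it stays in the same neighborhood.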
-- 
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y