Re: Transhuman Beach Party

Robert J. Bradbury (bradbury@www.aeiveos.com)
Tue, 7 Sep 1999 20:27:13 -0700 (PDT)

On Tue, 7 Sep 1999, den Otter wrote:

> ... commenting on Greg's comments on
> > -The Transhuman Beach Party- (a very short story)
>
> Afaik, the Singularity = superintelligence, and the consensus currently
> seems to be that AI will beat IA (and uploading) in the race for SI.

I'm not sure that this is the "consensus". If exponential nanoassembly arrived tomorrow, we would have *neither* (a) the designs for stuff to build, *nor* (b) an AI to populate the nanocomputers.

It is distinctly possible, and IMO *probable*, that we will get nanoassembly and molecular nanocomputers without an abundance of nanodesigns or any comprehension of how to make an AI. The "art" of engineering atomic-scale structures, and AI science, would both have to get onto growth curves like the one DNA sequencing has followed over the last 5 years before nanoassemblers would have lots of stuff to build or we would have an AI to populate a nanocomputer. I don't see that happening right now.
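
To give a feel for what such a curve implies, here is a back-of-the-envelope Python sketch of how long a "design library" takes to grow under compound, sequencing-style growth. The starting size, target size, and growth rates are purely illustrative assumptions of mine, not anyone's roadmap:

    # Back-of-the-envelope sketch: years for a "design library" to grow
    # under compound growth. Starting size, target size, and growth
    # rates are all assumptions chosen for illustration.
    import math

    start_designs = 100          # assumed: verified nanoscale parts today
    target_designs = 10_000_000  # assumed: a library big enough to matter

    for annual_growth in (0.5, 1.0, 2.0):  # 50%, 100%, 200% per year
        years = (math.log(target_designs / start_designs)
                 / math.log(1 + annual_growth))
        print(f"{annual_growth:.0%}/yr growth -> ~{years:.1f} years")

Even at a sustained 100%/year, that works out to roughly 17 years, which is the point: the assembler can arrive long before the library does.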

Eliezer may pull the AI rabbit out of his hat, and maybe that will solve the nanoscale design problem, but that involves a sharp discontinuity from current trends. If we want to discuss probabilities, it is more probable that AI, robotics, intelligent agents, etc. will step by step reproduce functions of the human mind (playing chess, driving a car, speech, speech-to-text, speech command, speech comprehension, OCR, OCR with document comprehension, etc.). [I'm using "comprehension" loosely here, more in the sense of grammatical comprehension than concept comprehension.] Engineering libraries of nanoscale designs will slowly be built up, verification programs will be created, etc., just as has happened in the semiconductor industry. These things *will not* happen overnight.
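
To make "verification programs" concrete: in the semiconductor world these are design rule checkers, and the nanoscale analogue might look something like the toy sketch below. The atom types, coordinates, and bond-length rules are entirely made up for illustration:

    # A toy "design rule check": the kind of verification program that
    # would accumulate alongside a library of nanoscale designs,
    # analogous to DRC tools in semiconductor EDA. The rule set and the
    # sample "design" below are invented numbers, not real chemistry.
    import math

    # assumed rule set: allowed bond-length ranges in nanometers
    BOND_RULES = {("C", "C"): (0.120, 0.165), ("C", "H"): (0.100, 0.115)}

    def check_design(atoms, bonds):
        """atoms: list of (element, (x, y, z)); bonds: list of (i, j) pairs.
        Returns a list of human-readable rule violations."""
        errors = []
        for i, j in bonds:
            (el_i, p_i), (el_j, p_j) = atoms[i], atoms[j]
            lo, hi = BOND_RULES.get(tuple(sorted((el_i, el_j))),
                                    (0.0, float("inf")))
            dist = math.dist(p_i, p_j)
            if not lo <= dist <= hi:
                errors.append(f"bond {i}-{j} ({el_i}-{el_j}): {dist:.3f} nm "
                              f"outside [{lo:.3f}, {hi:.3f}]")
        return errors

    # a two-atom "design" with one deliberately bad bond
    atoms = [("C", (0.0, 0.0, 0.0)), ("C", (0.30, 0.0, 0.0))]
    print(check_design(atoms, bonds=[(0, 1)]) or "design passes all rules")

Building up rule sets like that, design by design, is exactly the kind of slow library accumulation the semiconductor industry went through.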

> This means, almost by definition, an extremely uneven distribution of
> power and very rapid change. How do you plan to deal with such a
> scenario?

If Zyvex gave you a nanoassembler tomorrow, there would be little power shift and little rapid change. There would be a huge discussion about what to build, who would do the designs, how to distribute the wealth, etc. The wealth doesn't stay localized: as soon as there is "proof of concept" that directed nanoassembly is possible, every country in the world will devote huge teams of engineers to reproducing the results. If that meant opting out of the patent treaties, I think there is a good chance countries would do exactly that.

>
> What would you do if the Singularity came *tomorrow*? What would
> *any* of us do, except holding our breath?
>
If I'm working in the lab, and I happen to see a clear path to a nanoassembler, and realize that it is going to make my "company" rich but not me, am I going to keep working for the company? Unlikely. I'm going to go to the nearest investment banker and say, "Here is a path to building a nanoassembler; would you kindly give me $100M to build it?"

You would rapidly get a Balkanization of development efforts, which is probably a very good thing. You can only argue for a centralization of the wealth created by the singularity *if* you can argue that there is only a single path to nanoassembly and that a single individual/group/government can keep it under their control. The only way I see that happening is if someone gets a nanoassembler *and* has the designs for the machinery to eliminate everyone else on the planet in such a way that it is impossible for them to mount a reasonable defense. Knowing what I know about the weaknesses in nanotechnology, and the fact that knowledge of those weaknesses is fairly well distributed at this time, I'm very doubtful that could occur.

> Yes, wealth is very important. However, just being "well off",
> "moderately rich" or even "rich" won't probably be good enough.

Why? I've already pointed out that everyone becomes "effectively" rich ~5 years after robust nanotech becomes available. When nanotechnology becomes clearly visible on the horizon, we are going to undergo a very interesting shift in wealth. What is a skyscraper worth when a large number of nanobots can build the equivalent in a very short period of time (even *if* you could convince some silly person to leave his mansion and actually go to the office)? What is the wealth of a company that may find its market eliminated by open source designs? What is the wealth of government bonds when every individual in the country can "leave"? The problem with "wealth" is that you have to keep it someplace! If the traditional wealth holdings undergo a substantial deflation in value, those who see it coming and position themselves properly *will* be riding the wave.
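
The "very short period of time" isn't hand-waving; it falls out of geometric replication. A sketch with made-up numbers (the seed mass, doubling time, and skyscraper mass are all my assumptions):

    # Illustrative arithmetic: self-replicating assembler mass grows
    # geometrically, so builder capacity stops being the bottleneck.
    # Seed mass, doubling time, and structure mass are all assumptions.
    import math

    seed_kg = 1.0          # assumed initial mass of assemblers
    doubling_hours = 1.0   # assumed replication doubling time
    skyscraper_kg = 1e8    # assumed: ~100,000 tonnes of structure

    doublings = math.log2(skyscraper_kg / seed_kg)
    print(f"~{doublings:.0f} doublings, ~{doublings * doubling_hours:.0f} "
          f"hours of replication to reach skyscraper-scale builder mass")

Make the doubling time a day instead of an hour and it is still under a month. That is why the skyscraper deflates in value.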

> To get a prime spot on the wave, you'll have to out-paddle a whole lot
> of other skillful surfers, some of which will be more than willing to
> push you under in order to get to the wave first.

This is where the wave analogy fails. There is no "prime spot" on "the" wave. Nanotechnology enables an expansion of dimensionality, so that there are *many* waves. The human mind does not have the ability to focus on them all, and since the barriers to inventing new waves have been lowered, we will have trouble even keeping track of them all. It will be like the growth of the WWW today, only it will span both the physical and intellectual realms.

> Hard cash will buy you the bulging, rock-hard muscles you'll need to
> stay ahead.

Hard cash? You mean those pieces of paper backed by the U.S. government? Hmmmm... I'll take a nanoassembler over a stack of those pieces of paper any day. Probably because the nanoassembler lets me make as much of the hard cash as I want... :-) Oh, you meant gold? That stuff you can molecularly sort out of the oceans? Oh, I know, you really meant diamonds... :-)?

> As you'll be competing with large companies, governments, organized crime,
> dictators, terrorist groups and other folks with no shortage of funding,
> you better get rich indeed.

Large companies? You mean those entities that are left after open source designs eliminate most of the known markets? Governments? You mean those empty buildings left behind when everyone has moved to Oceania? Organized crime? Those people who want to sell me something illegal that I can build myself? Dictators, terrorist groups? Oh, those people who want to control the behavior of people who can easily relocate themselves to places where those groups have no influence?

I would have to say that most of these concepts rest on ideas that will be outmoded by the singularity. These organizations will be threatening mainly if they adopt a luddite stance, driven by the singularity's potential to eliminate them.

Also, companies, governments, mobs and terrorist groups will not easily be able to turn themselves into a unified SI. The problem of how to effectively *merge* minds will be one of the most difficult to solve, and so may not be solved until many years after nanotechnology develops. All you can effectively do in the beginning is upload separate minds, creating some kind of hive mind, and then allow them to pool their intelligence the same way we pool ours here, though with much higher intercommunication bandwidth. But if their group can do it, it is likely that your group can do it too. As I believe was pointed out in another thread, we really don't know how to evolve or dramatically enhance "intelligence" (as compared with, say, memory). If it's a trip, stumble & fall, pick-yourself-up-and-try-again process (as seems likely), then the size of the landscape to be explored and the large number of dimensions may make this a relatively slow process, though still much faster than what we are currently used to.
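
A toy Monte Carlo makes the dimensionality point (this is my illustration, with arbitrary step size, trial count, and dimensions, not a model of minds): as the number of dimensions grows, a random "stumble" of fixed size almost never points downhill, so blind trial-and-error slows to a crawl:

    # Toy Monte Carlo: fraction of random fixed-size steps that move a
    # point closer to the optimum, as dimensionality grows. Step size,
    # trial count, and dimensions are arbitrary illustrative choices.
    import numpy as np

    rng = np.random.default_rng(0)
    step, trials = 0.1, 5000

    for dim in (3, 30, 300, 3000):
        x = np.zeros(dim)
        x[0] = 1.0                           # start at distance 1 from optimum
        u = rng.normal(size=(trials, dim))   # random directions...
        u /= np.linalg.norm(u, axis=1, keepdims=True)  # ...normalized
        improved = np.linalg.norm(x + step * u, axis=1) < 1.0
        print(f"dim={dim:5d}: {improved.mean():6.2%} of random steps improve")

In 3 dimensions nearly half the stumbles help; in 3000, almost none do.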

It is worth remembering that it will still take hundreds of years to restructure the solar system into an optimal SI computing architecture. It doesn't happen overnight.

>
> That's why essentially some sort of transhuman cooperation effort is
> needed, as the chances that any individual could get very rich, and
> keep track of things at the same time, are very small indeed.

Agreed. Since "we" have devoted more thought to the problems, we (collectively) may have more insight into how to chart a path through the swamp that minimizes the number of people who drown.

> It's a good start, but if we really want to stand a fair chance, I'm
> afraid we'd need a much bigger and much better coordinated effort. Being
> aware of something is one thing, but actually having access to the
> technology is something rather different...

Gee, ABC discusses uploading; CBS discusses CR, cryonics & immortality. Are you suggesting that the Seventh-day-Born-again-Extropians should be banging on people's doors and preaching in the streets about SI conversion? There is the little matter of inertia to deal with. If we allow the ideas to slowly crystallize in the minds of the person-in-the-street, and they can see that the benefits (even though they upset the status quo) clearly outweigh the risks, then we will have an army of anti-luddites.

>
> Oh yes, me too. Nothing to lose, might as well give it a shot...
>

To me, the singularity looks like it will develop more slowly than most people think, so I have time to wax my surfboard. The parts of the singularity I can see involve people, and since I understand people at least to some degree, I'm not afraid of it.

I am *much* more concerned about what I may not see -- that early on someone creates a self-evolving amoral AI, and that that single entity somehow gains access to nanotech and turns itself into an amoral SI. In that case it may well be bye-bye-birdie time. I think this is a strong argument for widely distributed nanotech/AI research, since in that case the multitude of paths makes it likely that the concepts and technologies would exist to mount a defense against rogue SIs.

In the end, it's not whether you win or lose, it's whether you enjoyed the ride.

Robert