Very true. The power of evolutionary computing lies in its blind speed.
Let's say we have a million-organism brew that evolves for a thousand
generations. Now let's say we have a hundred thousand programmers, each of
whom chooses an organism out of the stew and redesigns it. The latter
process will proceed more slowly than the former, and be vastly more
expensive, but it will also be far more powerful and capable of reaching
loftier goals.
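To make "blind speed" concrete, here's a toy Python sketch of the loop I
have in mind - the population size, generation count, bit-string genomes,
and fitness function are all stand-ins I made up, not anything from the
scenario above:

    import random

    POP_SIZE = 1000       # stand-in for the million-organism brew
    GENERATIONS = 100     # stand-in for a thousand generations
    GENOME_LEN = 32

    def fitness(genome):
        # Toy objective: count the 1-bits. Any measure of "profit"
        # could stand here.
        return sum(genome)

    def mutate(genome, rate=0.02):
        # Blind variation: flip each bit with small probability,
        # with no design insight whatsoever.
        return [bit ^ 1 if random.random() < rate else bit
                for bit in genome]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]

    for generation in range(GENERATIONS):
        # Selection: the fitter half survives...
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP_SIZE // 2]
        # ...and reproduces blindly. No programmer redesigns anything.
        population = survivors + [mutate(g) for g in survivors]

    print("best fitness:", max(fitness(g) for g in population))

The hundred-thousand-programmer version would replace mutate() with
deliberate redesign - slower and dearer per step, but each step goes
much further.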
Evolution - in at least one way of doing it - is a vast, random search tree,
with each branch growing a number of leaves that depends on how profitable
that branch is right now. An AI might use the same tree, but with
intelligently redesigned versions of each leaf. Ultimately, guided evolution
and design converge to the same cognitive ability. How do we ourselves ask
the "What-if?" questions that are key to creative design? Some kind of
immense semantic search tree - perhaps guided, perhaps not - would be my guess.
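For concreteness, a toy Python sketch of such a tree, on my own
assumptions: each branch grows children in proportion to its current
"profitability", and the random perturbation in expand() is exactly the
spot where an AI would substitute intelligent redesign:

    import heapq
    import random

    def expand(node, max_children=8):
        # Each branch grows a number of leaves that depends on how
        # profitable that branch is right now.
        value, depth = node
        n_children = max(1, int(value * max_children))
        for _ in range(n_children):
            # Blind variant of the parent; guided evolution would
            # redesign this leaf intelligently instead.
            child_value = min(1.0, max(0.0,
                              value + random.uniform(-0.2, 0.2)))
            yield (child_value, depth + 1)

    def search(budget=10000):
        best = root = (0.5, 0)            # (profitability, depth)
        frontier = [(-root[0], root)]     # max-heap via negated scores
        while frontier and budget > 0:
            _, node = heapq.heappop(frontier)
            best = max(best, node)
            for child in expand(node):
                heapq.heappush(frontier, (-child[0], child))
                budget -= 1
        return best

    print(search())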
> > As an example, assume the worst scenario happens and an escaped badly
> > programmed dishwashing nanite
>
> This does not seem to be the worst scenario to me. The worst
> scenario would be something deliberately built to eliminate all
> life. (It would be even worse if it was designed to torture it.)
Very, very true. A lot of people on this list seem to lack a deep-seated
faith in the innate perversity of the universe. I shudder to think what would
happen if they went up against a perverse Augmented human. Field mice under a
lawn mower.
> > It will spread with the speed of a bacterial
> > infection, and be quite deadly.
>
> Why couldn't it spread much faster? Bacteria are limited to some
> specific kinds of hosts; the nanites could attack any organic
> material and many inorganic ones too. And if they were deliberately
> designed, they could transform themselves into missiles after they
> had eaten enough, and then swoosh across the seven seas in a very
> short time.
I'd actually think that the infection would spread in multiple waves. The
first wave might be small pellets travelling at hypersonic speeds, or even
lightspeed computer viruses travelling to existing replicators. The second
wave would be a softening-up wave that would reproduce rapidly and move at
high speed, taking small bites out of things and leaving third-wave
replicators behind. The third wave would be immensely destructive, the actual
gray goo. The fourth wave, if any, would assemble things out of the raw
material thus produced.
Note that these don't need to be different types of replicator. Each "wave"
could be a different mode of action, evoked by circumstances.
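In software terms - my analogy, with every name and threshold invented
purely for illustration - it's one replicator whose "wave" is just a mode
switch driven by local circumstances:

    from enum import Enum, auto

    class Mode(Enum):
        SCOUT = auto()      # first wave: fast, light dispersal
        SOFTEN = auto()     # second wave: quick bites, seed next wave
        CONSUME = auto()    # third wave: the actual gray goo
        ASSEMBLE = auto()   # fourth wave: build from the raw material

    def next_mode(replicator_density, raw_material_fraction):
        # One replicator design; the behavior is evoked by
        # circumstances, not hard-wired into separate types.
        if replicator_density < 0.01:
            return Mode.SCOUT
        if raw_material_fraction > 0.9:
            return Mode.ASSEMBLE
        if replicator_density < 0.5:
            return Mode.SOFTEN
        return Mode.CONSUME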
> >Of course, as soon as this becomes
> > known there will be several groups who quickly enclose themselves in
> > their already built underground bases (Cheyenne mountain is an
> > example that exists today, and with this level of nanotech I think
> > there will be more "nanosurvivalists" waiting for the disaster).
>
> They might have to go there pretty quickly, like after a nuclear
> alert. They will have to make sure that not a single little nanite
> finds a way in. They will have to hope that the nanite doesn't eat
> rocks and cement. They will have a limited time to figure out how to
> use their very limited resources to eliminate an enemy that already
> forms a thick, deadly layer over the whole earth. They have to hope
> that the nanites weren't deliberately designed to pile up explosives
> on top of their bunker and blow it all away. --Yes, they *could* make
> it, at least in a Hollywood movie...
I agree, except that they'll be using nukes, not ordinary explosives. Or the
nanites could surround the entire compound, lift it into space, and toss it
into the Sun.
> > So
> > while the biosphere turns to dishwashing goo there will be people
> > around who are very motivated to find a weapon against it, for
> > example a tailored "predator nanite" or something similar. It doesn't
> > appear likely that the goo could wipe out all the people (just a very
> > large number of them)
>
> I'm sorry, but it does seem to me a bit like wishful thinking (and
> reading too much SF?). I think I will call this the
> go-hide-in-your-basement solution to the antiproliferation problem.
Same here. It's immensely easier to destroy than create. You couldn't "hide"
from a predatory nanite. You could slow it down, keep it from getting to your
bunker, surround it with a continuous wall of nuclear flame, and then make
your escape into space and blast the gooey Earth into bits... then try to
rebuild civilization in the new asteroid belt.
But most likely, the nanite would get into your bunker before blossoming. And
then you're dead. (Unless you're an upload, but it seems to me that that
issue is for the uploads to worry about. This very discussion, on the
other hand, could be read by some genius at Foresight who's made a
breakthrough we haven't been told about.)
> The remarks you made seem predicated on the assumption that the
> nanites will be comparable to a particularly virulent biological
> plague. Suppose that this isn't true. Then the only method for
> avoiding disaster in a society where there are many independent
> individuals with full technological access is to have some kind of
> active nanotech immune system. It seems to me that the reactions
> towards higher binding energy would always have an advantage, so in
> this situation there would be only two ways of maintaining the status quo.
>
> The first is if all the material were already very close to its
> lowest energy state, so that no more reactions were economical. Does
> anyone have a good design for a computer that would work under those
> circumstances? (We would all be uploads then.)
>
> The second is to have the immune system quickly eliminate any
> plagues; it could exploit the fact that it has access to more energy.
> A good design for this?
>
> Aha, I just thought of a third way. The independent folks could all
> live in a virtual reality that was designed so they could do no
> major harm. They would have no access to the real reality, which
> would be ruled by a single entity.
Yeah, I noted - and rejected - that possibility some time ago. My theory was
that if you said you were going to do THAT, nobody would help you do it. It
is true that nanotechnology will most likely be controlled by a single entity
or a small group of them.
"Who will guard the guardians?" - maybe nanotechnology would give us a perfect
lie detector. Nanotechnology in everyone's hands would be just like giving
every single human a complete set of "launch" buttons for the world's nuclear
weapons. Like it or not, nanotechnology cannot be widely and freely
distributed or it will end in holocaust. Nanotechnology will be controlled by
a single entity or a small group... just as nuclear weapons are today.
If that entity is benevolent and Libertarian, utility nanites would be
released as they were programmed - to eliminate hunger, starvation, old age,
death, etc. The world would remain much the same, except most forms of
physically based pain and coercion would be eliminated. Other utilities might
be more flexible. No utility will give access to the forbidden molecular
level, but many might give access to higher levels. People might be able to
edit their synapses or their tissue-level body structure. (The former
scenario might result in Singularity in fairly short order.)
If that entity is benevolent and authoritarian, we'd probably wind up with the
scenario you just described. One guy stays in real life, the rest play tennis
in VR. Life goes on "hold" while everyone waits for the King to do a Singularity.
If that entity is malevolent, immediate and indiscriminate use of nuclear
weapons would be free humanity's only hope of survival. Humanity can survive
nuclear war and fallout. It cannot survive molecular warfare. Just as
nuclear war would utterly destroy the geopolitical balance made of governments
made of humans, so molecular warfare would destroy all humans made of tissues
made of cells made of molecules.
I should note that none of this is proposing a "Divine Right of Kings"
situation. It is not justification for the King retaining all power. Power
should still never be in the hands of one person. It's just that in this
case, the King wouldn't have any choice. The King remains bound by the usual
ethical restraints, and cannot impose any form of coercion.
--
sentience@pobox.com    Eliezer S. Yudkowsky
http://tezcat.com/~eliezer/singularity.html
http://tezcat.com/~eliezer/algernon.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I think I know.