On Tue, 14 Mar 2000, Zero Powers wrote:
> I am glad to see this point of view being aired, if for no other reason than
> that it will spur further debate. I am as pro-way-out-tech as anybody on
> this list. But I do share Joy's concern that our pursuit of this tech is
> not without *significant* dangers.
Perhaps it is his way of getting more attention on the matter. Joy
is certainly smart enough to write a piece that does not reflect his
true feelings. The fact that there *are* dangers is very well known.
Those who are not Senior Associates at the Foresight Inst. (or who are
but haven't registered at the SAM site) do not know that there is a draft
copy of the "Policy on Nanotechnology" document. One key point is:
- Replicators must not be capable of replication in a natural,
uncontrolled environment.
That policy, if followed, removes Joy's "unintentional" accidents argument.
Yes, we can get into long discussions about how "unenforceable" it is,
but the point is the same one I made with "almost everything" machines.
The truth is that we have replicators replicating out in the uncontrolled
environment *now*. If anything, nanotech may make the real world *safer*.
Turn the argument on its head -- would you rather live in a world where
everything is known and engineered for maximum safety (you can have it as
dangerous as you like in the virtual world), or would you rather live
in a world where the things that creep up on you in the night can and
do kill you?
His argument that we don't know how to produce reliable systems is one
we have discussed in previous threads re: trustability. The current research
into secure transactions and reliable nets *is* creating the body
of knowledge on how to engineer things that are fault tolerant and
don't cause excessive harm when they do break. (Witness the recent
landing of the plane in San Francisco with one wheel up.) Do we
get it right all of the time? No. But we seem to keep improving
our skills with time. As Moravec points out, we will have the computing
power to run simulations to see if there are potential problems before
we let things "out".
The terrorism/mad-man-letting-loose-nanotech-horrors scenario doesn't seem
too probable, because the motivations for it mostly disappear when
human forms have all of their needs fulfilled. You still have the
Saddams and Abins to worry about, but fortunately they are few
and far between, and it will be much harder for them to recruit
a nano-terror cult in the world we envision.
> I know there are people more
> technologically and scientifically literate than me who feel that the
> potential dangers are not worrisome enough to warrant stepping back long
> enough to make sure that what *can* be done *should* be done.
The point is we *are* doing those things. The Foresight Inst., people
on this list, and many others actively work on these problems.
You have to keep in mind there *is* a cost
to slowing down. I think the rate of death from hunger *alone*
is equivalent to a 747 loaded with people crashing into a mountain
*every* hour. You just don't hear about it on the news every night.
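A rough back-of-envelope check (the passenger figure is my assumption,
not something Joy or Kurzweil cite):

    ~450 people per 747  x  24 hours/day  x  365 days  ~=  3.9 million/year

which is in the same ballpark as (if anything, below) the hunger-mortality
estimates usually quoted, so one 747 per hour is a conservative comparison.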
The status quo has got to go.
> That alone is
> enough to convince me that we cannot be assured of smooth sailing as we set
> out into these waters.
The *current* waters are filled with danger as well. The only difference
is that you think you know about them and can avoid them.
> But it certainly cannot hurt to debate and
> debate and debate these issues until we are blue in the face.
We will do that. I expect there will be many conferences like the one that
defined biotech hazards and research protocols at Asilomar. We can have
those because the discussion is *open* -- in contrast to the situation
surrounding the development of atomic weapons, which Bill uses as an example
of why we shouldn't develop GNR (genetics/nanotech/robotics).
>
> Kurzweil seems to give us a 50% chance of avoiding our own extinction.
>
Kurzweil is a pessimist (his self-image as an optimist notwithstanding).
I'd put our chances at more like 80-90%. I actually worry more about
near-Earth asteroids than I worry about bio/nanotech. I worry the *most*
about self-evolving AIs with no moral code. Kurzweil [& Joy] are also
way conservative on when we get human-equivalent computing. The hard
part will be whether good AI takes 5 years or 20, but that's in Eliezer's,
Doug's, and a few others' hands.
> Personally, I don't like those odds. Heads you get immortality and
> unlimited wealth. Tails you get global sterilization. I need a little
> more convincing before I vote to flip that coin.
The article will have one interesting side effect: it will probably
function as a booster for the Mars Camp. Of course the Mars Camp
doesn't realize that if they flee to Mars because of the imminent
development of nanotech Robots on Earth, the nanotech Robots will
probably be waiting for them when they arrive...
It seems Joy (and Kurzweil) haven't thought things through
completely. The colonization of space is clearly silly in a universe
that may be populated by Matrioshka Brains.
Robert