Re: Luddites are everywhere!

From: Zero Powers (zero_powers@hotmail.com)
Date: Wed Mar 15 2000 - 02:38:16 MST


>From: "Robert J. Bradbury" <bradbury@aeiveos.com>
>
>On Tue, 14 Mar 2000, Zero Powers wrote:
>
> > I am glad to see this point of view being aired, if for no other reason
> > than that it will spur further debate. I am as pro-way-out-tech as anybody
> > on this list. But I do share Joy's concern that our pursuit of this tech
> > is not without *significant* dangers.
>
>Perhaps it is his way of getting more attention on the matter. Joy
>is certainly smart enough to write a piece that does not reflect his
>true feelings.

What makes you think he would take time from what is probably a very busy
schedule to write an essay about something he does not believe? I also
heard an interview with him on the radio tonight, and he certainly seemed
convinced of the concerns he expressed in the essay.

>The fact that there *are* dangers is very well known.

Perhaps the dangers are well known by Senior Associates at the Foresight
Inst. and by the members of this list. But I doubt that most of the other
readers of Wired have given much thought to these things.

>Those who are not Senior Associates at the Foresight Inst. (or who are
>but haven't registered at the SAM site) do not know that there is a draft
>copy of the "Policy on Nanotechnology" document. One key point is:
>- Replicators must not be capable of replication in a natural,
> uncontrolled environment.

That's a good policy. I like that policy. But it will take a *lot* more
than a well-stated policy to convince me that we ought to flip the GNR coin
at this point.

>That policy, if followed, removes Joy's "unintentional" accidents argument.

You mean that policy, if followed everywhere and always, without exception.
That's an awfully big *if* you're tossing about.

>Yes, we can get into long discussions about how "unenforceable" it is,
>but the point is the same one I made with "almost everything" machines.
>The truth is that we have replicators replicating out in the uncontrolled
>environment *now*. If anything nanotech may make the real world *safer*.
>Turn the argument on its head -- would you rather live in a world where
>everything is known and engineered for maximum safety (you can have it as
>dangerous as you like in the virtual world) or would you rather live
>in a world where the things that creep up on you in the night can and
>do kill you?

Sure, a world where everything is engineered for maximum safety sounds good
to me. It just seems to me that it's a lot easier said than done.

>His argument that we don't know how to produce reliable systems we have
>discussed in previous threads re: trustability. The current research
>into secure transactions and reliable nets *is* creating the body
>of knowledge on how to engineer things that are fault tolerant and
>don't cause excessive harm when they do break. (Witness the recent
>landing of the plane in San Francisco with one wheel up). Do we
>get it right all of the time? No.

Now you have hit the nail on the head! The fact is, we usually don't get it
right. In fact, we rarely if *ever* get it right the *first* time. Version
1.0 of any invention is usually very, very buggy. The problem with GNR is
that, unlike a computer or network failure, you don't simply get to reboot
and try again. The first GNR mistake could very well be our last.

>But we seem to keep improving
>our skills with time. As Moravec points out we will have the computing
>power to run simulations to see if there are potential problems before
>we let things "out".

Even computer simulations can fail to reveal problems. I'm sure
computer simulations were run prior to the Challenger disaster and prior to
the various Mars probe failures. The fact is, the only surefire way to see
if we've got it right is to build the device in realspace, let it fly and
see if it works.

>The terrorism/mad-man letting loose nanotech horrors doesn't seem
>too probable, because the motivations for it mostly disappear when
>human forms have all of their needs fulfilled. You still have the
>Saddams and bin Ladens to worry about, but fortunately they are few
>and far between and it will be much harder for them to recruit
>a nano-terror cult in the world we envision.

I wish I could be so optimistic. There may be only one Saddam Hussein and
only one Osama bin Laden. But there are *millions* of people who hold
similarly irrational religious beliefs. Giving them long life and unlimited
wealth may very well not assuage their belief that God has told them that
the "Satanic" new world order must be destroyed at all costs.

> > I know there are people more
> > technologically and scientifically literate than me who feel that the
> > potential dangers are not worrisome enough to warrant stepping back long
> > enough to make sure that what *can* be done *should* be done.
>
>The point is we *are* doing those things. The Foresight Inst. as
>well as people on this list and many other places actively work
>on these problems. You have to keep in mind there *is* a cost
>to slowing down. I think the rate of death from hunger *alone*
>is equivalent to a 747 loaded with people crashing into a mountain
>*every* hour. You just don't hear about it on the news every night.

Hmmm. Hundreds of people (on an over-crowded planet) dying every hour,
compared with the risk of a global gray goo problem. Can you guess which
one I'd pick?

>The status quo has got to go.

Surely it does. But at the risk of jumping from the frying pan into the
fire?

> > That alone is
> > enough to convince me that we cannot be assured of smooth sailing as we
> > set out into these waters.
>
>The *current* waters are filled with danger as well. The only difference
>is that you think you know about them and can avoid them.

No, the difference is that the danger of the current waters is more of the
same: scattered famine, sickness and death. The danger of the GNR waters is
*extinction.*

> > Kurzweil seems to give us a 50% chance of avoiding our own extinction.
> >
>Kurzweil is a pessimist (his self-image as an optimist notwithstanding).
>I'd put our chances at more like 80-90%.

Is there any reason I should put more credence in your guess than in
Kurzweil's? He, after all, pessimist or not, has a fairly impressive record
of predicting the future.

-Zero

"I like dreams of the future better than the history of the past"
--Thomas Jefferson




This archive was generated by hypermail 2b29 : Thu Jul 27 2000 - 14:05:15 MDT