John Clark, <jonkc@worldnet.att.net>, writes:
>
> In the current issue of Wired magazine is an article I did not expect
> to see, the author expresses great fear over Nanotechnology, computers
> smarter than humans, and of course genetic engineering. After expressing
> sympathy for the ideas if not the methods of Ted Kaczynski, the mad
> Unabomber, the author advises "limiting our pursuit of certain kinds
> of knowledge". The really strange thing is not the content, which is
> pretty standard Luddite stuff; the bizarre thing is that the author is
> Bill Joy, chief scientist and cofounder of Sun Microsystems. It's a
> little sad to see an attitude like that in the very heart of Silicon
> Valley.
This is now the third mention of this article in the last two days.
I have had a chance to read it now, and I was unhappy with it, for
several reasons.
For one thing, it is a one-sided polemic. However, it is written in a
calm, dispassionate style which somewhat disguises that fact. There is
little development of the advantages that technology could offer, and
considerably more extensive writing about the dangers.
However, even with this emphasis, the dangers aren't really spelled out
very clearly. For robotics, we are given only a quote from Moravec about
how robots will inevitably out-compete humans. We have discussed here in
the past the principle of comparative advantage, which gives an economic
rationale for even uncompetitive parties to carve out an economic niche,
so that the outcome is not as one-sided as Moravec (who is no economist)
forecasts.
For nanotech it is the standard grey goo scenario, which we have also
debated at length. The upshot is that it is not that easy to devise an
effective grey goo that cannot be defeated by comparable technology.
For genetic engineering it is any of the scary frankenfood scenarios
now being harped upon by the greens. Joy quotes his friends Amory
and Hunter Lovins, long-time politically active environmentalists,
in opposition to genetic engineering. Their article, available at
http://www.rmi.org/biotechnology/twobotanies.html, is IMO very poorly
reasoned and full of cheap rhetoric.
With this rather sketchy outline of the dangers, Joy fleshes out his
article with a rambling passage about the early days of the nuclear arms
race, with the implication that the world would have been a better place
had the technology somehow been turned over to an international body.
He looks for essentially the same solution for the three dangerous
technologies: "relinquishment". Meaning, you give up on them and stop
working on them. And of course an important element of this has to be
"transparency", i.e. universal monitoring. (Oddly enough he doesn't
mention David Brin, although he manages to drag in the Doomsday Effect.)
There is no discussion of the dangers of setting up such a universal
monitoring system. There is no discussion of the suffering and deaths of
poor people around the world if economic growth is thwarted by preventing
technological development. In short, there is no discussion of the risks
and costs associated with his recommended path.
Instead we are told of the advice of the Dalai Lama, who tells us to
conduct our lives with love and compassion for others. We hear about
Jacques Attali, who supposedly inspired Joy's approach to the Java and
Jini architectures, telling us that the highest utopian goal is altruism.
This alone "associates individual happiness with the happiness of
others, affording the promise of self-sustainment." (As the Randians
love to point out, this circular definition is useless; if everyone
adopts altruism, the only way to make someone happy is to make other
people happy. But then you don't know how to make anyone happy unless
you already know how to make someone happy. It's a philosophy without
a foundation, and in practice the only reason it appears to work is
because no one is truly altruistic. Altruism is a philosophy which
would collapse into meaninglessness if everyone truly followed it.)
Given that he literally wants to shift the path of the entire world
economy, he needs to come up with a much better explanation of the
problems and why the solutions we have planned for can't work. In his
discussion of the early days of the nuclear arms race, he makes the
analogy that we are in the same position today as those early nuclear
scientists. But in fact, nuclear weapons have not been the end of the
world as many predicted. Is it right to give up our future just on the
basis of fears?
Last year, Tim May was posting on usenet with a signature line
which read, "Y2K: It's not the odds, it's the stakes." But I think
that we all learned that in fact the odds are pretty important too.
The sheer magnitude of the Y2K non-event suggests to me that the odds
of a catastrophe actually were pretty near zero. I didn't see that;
like a lot of other people, I was fooled.
But it seems that Joy is making something of the same mistake. The stakes
are high, yes; survival of humanity, survival of life. But the odds
must be evaluated as well. Every course carries risk, consequences,
and costs. We must look at the big picture, and a one-sided analysis
does not do that.
I found the last page of the article to be reminiscent in tone of
Douglas Hofstadter's sanctimonious essay on his decision to join
the nuclear freeze movement of the 1980s. In hindsight this was a
misguided effort, which if it had succeeded would only have enabled
the corrupt and evil Soviet empire to stay intact a few years longer.
(And don't get me started on the tainted science behind the nuclear
winter scenario from the same period, one of the worst examples of
politics masquerading as science, and heavily supported by Carl Sagan,
whom Joy quotes approvingly.)
Like Hofstadter before him, Joy can't quite understand why all the smart
people he associates with, apparently largely tree-hugging Birkenstock
liberals like himself, aren't seized by the need to Save Humanity
by adopting the Grand Project that he sees so clearly. He's puzzled,
and slightly hurt. Disappointed, more. He thought humanity was so
much _better_ than that.
Of course, Hofstadter only wanted us to stop building weapons.
Joy expects us to halt economic progress itself. I think it's pretty
clear who's crazier.
A final quote from Joy:
"I believe we must find alternative outlets for our creative forces,
beyond the culture of perpetual economic growth; this growth has largely
been a blessing for several hundred years, but it has not brought us
unalloyed happiness, and we must now choose between the pursuit of
unrestricted and undirected growth through science and technology and
the clear accompanying dangers."
Fractured syntax aside: Bill Joy, as a co-founder of Sun Microsystems,
is probably worth hundreds of millions of dollars. He is sitting awfully
pretty to be calling for a halt to economic growth. Let him give away
his vast wealth, and go to live as a subsistence farmer in Zaire for a
few years. Then he will be in a better moral position to lecture us on
the need to stop growth.
I haven't had time to even mention the scary pictures which decorate
the article: genetically engineered foods, nuclear explosions (lots of
those), Strangelovian equipment, biohazard suits, a boy swimming in a
polluted lake. It all adds up to technology out of control.
In short, it's a powerful but one-sided argument, strong on attack but
short on defense. Rebuttals and counterarguments are certainly possible.
But in practice, the ideas seem so outlandish and impossible to implement
that it seems unlikely to go anywhere. (Of course, that's what I thought
about David Brin as well, and almost every day I read something more
about how good it would be to get rid of privacy once and for all...)
Hal
This archive was generated by hypermail 2b29 : Thu Jul 27 2000 - 14:05:07 MDT