John Marlow wrote:
>
> > > In which case, there may not be a damned
> > > thing we can do about it.
> >
> > Yep.
>
> **GAAAK! And you proceed! Or are you simply that
> convinced of our own incompetence? (But then, this
> would actually argue AGAINST proceeding, if you think
> about it...)
Why, oh why is it that the machinophobes of this world assume that
researchers never think about the implications of their research? When I
was a kid, I thought I'd grow up to be a physicist like my father... maybe
work on nanotechnology... I am here, today, in this profession, working
on AI, because I believe that this is not just the best, but the *only*
path to survival for humanity.
> **No--I am, for example, strongly in favor of
> developing nanotechnology, and that is by far the
> biggest risk I can see.
>
> **Building an AI brighter than we are might run a
> close second. That I'm not for.
>
> **The first offers nearly limitless advantages for the
> risk; the second offers only certain disaster. It's a
> risk-benefit analysis.
On this planet *right now* there exists enough networked computing power
to create an AI. Stopping progress wouldn't be enough. You'd have to
move the entire world backwards, and keep it there, not just for ten
years, or for a century, but forever, or else some other generation just
winds up facing the same problem. It doesn't matter, as you describe the
future, whether AI presents the greater or the lesser risk; what matters
is that a success in Friendly AI makes navigating nanotech easier, while a
success in nanotechnology does nothing to make navigating AI easier. In fact, nanotechnology
provides for computers with thousands or millions of times the power of a
single human brain, at which point it takes no subtlety, no knowledge, no
wisdom to create AI; anyone with a nanocomputer can just brute-force it.
It's very easy for me to understand why you're concerned. You've seen a
bunch of bad-guy machines on TV, and maybe a few good-guy machines, and
every last one of them behaved just like a human. Your reluctance to
entrust a single AI with power is a case in point. How much power an AI
has, and whether it's a single AI or a group, would not have the slightest
impact on the AI's trustworthiness. Power affects trustworthiness only in
humans, who've spent the last three million years evolving to win power
struggles in hunter-gatherer tribes.
As it happens, AI is much less risky than nanotechnology. I think I know
how to build a Friendly AI. I don't see *any* development scenario for
nanotechnology that doesn't end in a short war. Offensive technology has
overpowered defensive technology ever since the invention of the ICBM, and
nanotechnology (1) makes the imbalance even worse and (2) removes all the
stabilizing factors that have prevented nuclear war so far. If nanotech
comes first, we're screwed. The probability of humanity's survival is the probability
that AI comes first times the probability that Friendly AI is possible
times the probability that Friendliness is successfully implemented by the
first group to create AI. If Friendly AI is impossible, then humanity is
screwed, period.
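To spell out that product explicitly (the symbols below are just shorthand
for the three factors named above; I'm not trying to put numbers on them):

\[
P(\text{survival}) = P(\text{AI comes first}) \times P(\text{Friendly AI is possible}) \times P(\text{Friendliness implemented by the first AI group})
\]

If any one factor is zero, the product is zero; in particular, if Friendly AI
is impossible, the whole thing collapses no matter what else goes right.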
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence