Re: Eugene's nuclear threat

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Sep 29 2000 - 16:24:03 MDT


Robin Hanson wrote:
>
> Eliezer S. Yudkowsky wrote:
> >
> >And scientists and engineers are, by and large, benevolent - they may
> >express that benevolence in unsafe ways, but I'm willing to trust to good
> >intentions. After all, it's not like I have a choice.
>
> A week earlier, Eliezer S. Yudkowsky wrote:
> >
> >On the contrary; the Earthweb is only as friendly as it is smart. I really
> >don't see why the Earthweb would be benevolent. Benevolent some of the time,
> >yes, but all of the time?
>
> These two quotes seem worth comparing.

Okay, let's. Scientists and engineers are benevolent most of the time, and so
is the Earthweb. I might trust a *mature* Earthweb to make *one* decision
correctly, just as I would trust a *successful* AI engineer to program *one*
Friendly AI. If we had to trust a thousand teams to create a thousand AIs,
we'd be doomed. And I certainly wouldn't trust a randomly selected AI
researcher to competently program a Friendly AI, just as I wouldn't trust a
randomly selected AI researcher to successfully build an AI at all - this
being the analogy to trusting a young Earthweb.

> Eliezer distrusts the scenario of
> EarthWeb becoming smarter and smarter, with more and more input from
> advanced software as available, because people can be emotional. But he
> thinks we should trust researchers creating a big bang AI, because
> scientists are benevolent.

I don't think we should trust them because they're benevolent; I said we could
probably trust them to *be* benevolent. If we build a scenario that relies on
the winning researchers being benevolent, it's not a scenario-killing
improbability - most people *are* benevolent. *Competence* is another issue
entirely.

> This seems to me the Spock theory of who to
> trust - let scientists run things because they aren't emotional, and don't
> let markets run things, because they might let just any emotional
> person have influence.

Please stop deliberately distorting my words. I don't trust scientists to run
*anything*, any more than I trust democracy or an Earthweb. Building a
Friendly AI isn't "running" anything. Internally, it's an AI project, not a
social interaction of any kind.

> Of course if people realized that scientists knew better and should be
> trusted, then non-scientists wouldn't bet against scientists in a market,
> and scientists would effectively run an EarthWeb.

The scenario you describe has an unfortunately low probability of coming to
pass. (My mother once recommended this to me as a more polite alternative to
"You're wrong.")

> But since people are
> too stupid to know who their rightful betters [bettors? -- Eliezer] are,
> Eliezer prefers to impose on them the choice they wouldn't choose for
> themselves.

Robin, you - like myself - have *got* to learn to resist that parting shot.
Did that really add anything to the discussion?

I would no more trust scientists to run the planet than I would trust a group,
or an Earthweb. And may I note that I would expect benevolent intentions from
the person on the street almost as often as I would expect them from AI
scientists - say, 70% vs. 80%. When I said that I believed the intentions of
researchers were good, I didn't mean that the rest of humanity was bad.

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



This archive was generated by hypermail 2b29 : Mon Oct 02 2000 - 17:39:27 MDT