From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Tue Aug 12 2003 - 13:08:46 MDT
On Tue, 12 Aug 2003, nanowave wrote:
> Yes, the possibility of unwittingly unleashing a kind of "berserker" upon
> the earth would seem to be not statistically insignificant, [snip]
Like hell. If it is intelligent enough to accumulate resources and transport
itself, it is *REALLY* dangerous (and that does *not* require much intelligence).
That is why my hat is off to Eliezer for pointing out the problem of an
unfriendly AI. (Take SARS and multiply it by many orders of magnitude and
one gets the hazard level of an unfriendly AI.)
But one doesn't have to worry only about unfriendly AIs. Unfriendly
viruses appear to have this capability already (look at influenza epidemics).
> Is it completely unreasonable to presume that 'human-level' intelligence
> implies human-level ethics and values (two kinds of social intelligence)
> as well as pure knowledge crunching/combinatorial power?
Perhaps. Our ethics/values have been dictated by survival requirements
(better to have friends than enemies). A novel AI need not have such
restrictions. (Just program a berserker -- screw the consequences.)
> A tendency toward malevolence and unnecessary destruction is still
> considered entropic and SUB-human in these parts, is it not?
I would say that "malevolence and unnecessary destruction" would fall
into the category of "entropicness". The more "organized" resources
one destroys, the fewer resources one has left to advance extropicness.
Consider a simple example: the resources the U.S. devoted to the
reconstruction of Japan and Germany after WWII might have been better
devoted to scientific (or other types of) research.
Robert