Re: Will we be wiped out by SI? WAS: RE: CULTURE: Chicago Tribune ...

From: J. R. Molloy (jr@shasta.com)
Date: Sun Jan 07 2001 - 10:47:36 MST


From: <Eugene.Leitl@lrz.uni-muenchen.de>
> > I see no reason why an SI should wipe out humanity.
>
> Strange, I don't see a single reason why it shouldn't. And a lot
> of reasons why it should.

How many is a lot? How about six?
OK, I'd settle for just three.

Let's see... psychosis would be one reason. But psychotics don't always
want to wipe out humanity, and even a psychotic SI could be contained by
producing multiple SIs, some of which could monitor the others. Or perhaps
the SI decides that wiping us out would be the intelligent thing to do...
rather as some people would like to wipe out the entire family of biting
insects, such as mosquitoes. But during the time it takes to create a
working SI, tests more stringent than those given to humans could be
employed to ensure the psychonomic balance of the SI. Or maybe the SI
decides that it needs all the atoms incorporated in the bodies of humans,
and so sets about disassembling humanity. That's almost, but not quite, as
scary as the grey goo scenario. Personally, I don't feel so antagonistic
or prejudiced against alien intelligence that I'd smear it with such
intentions even before it's born.

Human nature being what it is, I think it more likely that genetic
programming experimenters will try to build an SI if doing so is illegal
than if it's actually encouraged. It seems to me the military finds AI/SI
very attractive, and I've heard that the US Navy has begun a Human Brain
Project which could readily lead to alien intelligence.

Why should SI *not* wipe out humanity? Because literally billions of human
hours will be spent developing and testing the system, far more attention
than is given to any ordinary mortal. With so much attention directed
toward it by so many brains, its mental health would easily outdistance
that of any human (except for good ol' Buddha). At this point I'm reminded
of the harm "2001: A Space Odyssey" has done to AI research. What a
preposterous notion that HAL would have a nervous breakdown rather than
the far less extensively tested crew members of the mission. Human error
remains the greatest cause of malfunctions in complex systems. That's why
we use these kinds of machines in the first place: they're more reliable
than humans.

> And why do I have to repeat the above arguments, all the time, year after year?

I understand your frustration very well (because you've expressed it very
clearly).
<put a dumb smiley here>
Let me take this opportunity to offer help and collaboration in putting
together a paper on this subject that you can upload to the extropy web.
Why, we can do it online, right here in front of everyone. You can use me
as a sounding board. How about it? Let's beat this horse to death,
finally.

Cheers,

--J. R.

PS: I don't think we need to go as far as Jupiter brains and Diaspora and
so forth. Just a careful explanation of how to go about preventing the
human race from creating Robo sapiens, Mind Children, Artilects, and
Spiritual Machines, and the reasons for doing so.
No hurry... we've got until next year, right?


