Re: Keeping AI at bay (was: How to help create a singularity)

From: R. (coyyote@hotmail.com)
Date: Mon Apr 30 2001 - 15:15:34 MDT


I'm sorry, I don't understand what "you people" means.

I am not a Luddite, if that's what you mean.
I mean dangerous in the error-prone sense, not as a generalization about
AI per se.

----- Original Message -----
From: <Eugene.Leitl@lrz.uni-muenchen.de>
To: <extropians@extropy.org>
Sent: Monday, April 30, 2001 3:31 PM
Subject: Re: Keeping AI at bay (was: How to help create a singularity)

> Robert Coyote wrote:
> >
> > I believe many of the assertions on this thread are dangerously
> > anthropomorphic projections; it may be that the answer to the question
> > "What does AI want?" is incomprehensible.
>
> I don't have to anthropomorphize an <insert Something Really Bad here,
> such as a 100 km space rock impacting the top of your skull/two
> vagrant neutron stars deciding to fuse Somewhere Under Your Bed>
> to infer it's going to make me very dead, whether with or without
> lots of spectacular pyrotechnics.
>
> What is it with you people? Are you just generically unreasonable,
> or do you habitually let your kids play Russian roulette, or
> soccer on a minefield?
>
> I do not care what the AIs want, as long as they're something
> really big, fast, powerful, growing exponentially, and have
> -- even unbiased -- agendas of their own. And they will be that, very
> soon, if they're subject to evolutionary constraints. Which seems
> about the only bit of prejudice I have.
>
> I don't even have to be absolutely 100.1% positive about it; as
> long as I think there's a nonnegligible probability of Ragnar0k,
> I just think it's a monumentally bad idea to attempt. Bill Joy
> is right on the money on this one. Relinquishment, shmerlinquishment,
> just hold the horses until we're ready.
>
> Is it too much to ask?
>


