Re: AI safeguards [Was: Re: Humor: helping Eliezer to fulfill his full potential]

From: Spike Jones (spike66@ibm.net)
Date: Mon Nov 06 2000 - 21:12:17 MST


Eliezer, you are a gentleman and a scholar. I am honored to
have made your acquaintance.

Now then, I see many parallels between current AI research
and nuclear fission research in the 1930s. The task of AI is
much more difficult, but in both cases the dangers and rewards
are great. Altho inherently dangerous, I believe it *is*
possible and logical to use the term "safety" in the same
sentence as AI, just as much as it was in the 30s with nukes.

Carry on, yall! spike

>Spike, I realize that I used to be somewhat cavalier about the
>issue of Friendly AI...

p.s. Eliezer, no apologies necessary. You, like the rest of us,
are entitled to a youthful indiscretion or two, so long as they
are *safely* in the distant past. {8^D

"Eliezer S. Yudkowsky" wrote:

> I'm currently working on a semi-independent section of "Coding a Transhuman
> AI" entitled "Friendly AI" which deals with these issues.
>
> Spike, I realize that I used to be somewhat cavalier about the issue of
> Friendly AI, mostly because I took objective morality as the default
> assumption, and was still thinking about Friendly AI in morally valent terms....



This archive was generated by hypermail 2b30 : Mon May 28 2001 - 09:50:20 MDT