Re: AI safeguards [Was: Re: Humor: helping Eliezer to fulfill his full potential]

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Nov 06 2000 - 10:56:52 MST


I'm currently working on a semi-independent section of "Coding a Transhuman
AI" entitled "Friendly AI", which deals with these issues.

Spike, I realize that I used to be somewhat cavalier about the issue of
Friendly AI, mostly because I took objective morality as the default
assumption and was still thinking about Friendly AI in morally valent terms.
I have since gotten over this, and I believe my thinking about Friendliness
has worked its way down to the level where everything can be phrased strictly
in terms of cause and effect. I'm spending as much time thinking about
Friendly AI as you could possibly wish for.

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


