Re: Hawking on AI dominance

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Sep 10 2001 - 01:38:59 MDT


Mike Lorrey wrote:
>
> Until presented with any alternatives, we have to act from this default
> state, and trying to start from scratch is more likely to result in
> failure to achieve AI than not. Therefore, any AI we develop will likely
> act and behave mighty human-like, with at least *some* human values. As
> an intelligence with higher abilities to self-modify, whether it retains
> those values will obviously depend upon whether human values are as
> objective as some think, as well as whether we actually hard-wire them
> in to build "A Friendly AI".

I'm sorry if I'm being repetitive about this, but Friendly AI is not about
enslavement. Enslavement has very little chance of working, and I think
most people instinctively know that, so describing Friendly AI in those
terms really is a strawman argument. The best way I know of to describe
Friendly AI is to borrow some terminology from Gordon Worley on the SL4
mailing list, and say that a Friendly AI operates within "the human frame
of reference".

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
