Re: Artificial Reality

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Jul 07 2001 - 12:11:06 MDT


Christopher McKinstry wrote:
>
> Deeper in the future, things start to get funny when you factor out the
> basic biological limitations of people, such as life span and memory
> capacity that are not limitations for machines. Eventually we will enter
> into resource conflicts with immortal machines. We will lose and
> rightfully so. Evolution will say "Next" and that will be that.

Chris,

May I suggest that you take a look at the SIAI Guidelines on Friendly AI?
  http://singinst.org/friendly/

I am not suggesting that GAC has any remote need of such a thing - it was
J.R. Molloy's use of the words "GAC" and "Friendly" in the same sentence
that started off this whole debate in the first place - but there is an
ongoing public debate about technology and humanity's future, and the
long-term prospect of benevolent AI is a part of it. I think our chances
are somewhat better than they appear in your statement above; or, rather,
our chances could be better if we play our cards right.

Friendly AI is not intended as a means of keeping "machines" "subservient"
or "under control". I agree that it is desirable that we eventually move
beyond humanity and that transhuman AI will play a part in that. However,
I would offer for your consideration three additional propositions.
  First, that it is morally desirable that existing individual humans have
the chance to participate in the future.
  Second, that it is morally desirable that individuals who just want to
stay human should be able to do so without worrying about "resource
conflicts".
  Third, that regardless of whether all AIs ever constructed are Friendly,
or even whether it is philosophically desirable that all AIs be Friendly,
there is still a good chance of achieving goals one and two if the first
AIs constructed are Friendly.

Personally, I would say that propositions one and two are not the
consequences of any inherent moral superiority of orga over mecha, but
rather follow from the fact that humans are sentient beings and sentient beings have
rights. In other words, it's not a mechanophobic thing. If sentient AIs
capable of experiencing pleasure and pain currently existed, I would want
to preserve their rights in the future as well.

Regardless of whether you agree, I would ask that if you in the future
happen to discuss the possibility of evolutionary, war-to-the-death
competition between humans and AIs, you also at least mention the
possibility of Friendly AI, even if only in the form of the phrase "There have
been proposals for Friendly AI, but I think they're unworkable."

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



This archive was generated by hypermail 2b30 : Fri Oct 12 2001 - 14:39:42 MDT