Jon Reeves wrote:
>
> I've been reading this thread with interest (sorry - lurking again), and I
> think the question that is more to the point is "Why would AI want to be
> _unfriendly_?"
>
> The extermination (or enslavement) of several billion people would surely
> require an expenditure of a considerable amount of time and energy - even
> for an SI. What motivation could it have for doing so?
None. The possibility being discussed is that the SI will perceive no
particular difference between humans and other pieces of matter and will use
us for spare parts.
> It seems to me that most sapients consider diversity to be a very important
> thing - why would an A/SI not think the same?
Now *you're* anthropomorphizing. What do you mean, most sapients? All you
can possibly mean is "most humans". And at that, your statement is false for a
great many humans, if not an outright majority.
Again. Diversity is a supergoal, albeit a badly-phrased one. Why would an
arbitrary piece of source code start valuing diversity? If you say "because
diversity makes the Universe a shinier place", please list the supergoals
being referred to ("making the Universe a shinier place") and kindly explain
how THOSE goals got into an arbitrary piece of source code.
With respect to the supergoal "Be friendly to humans" - obviously, this is
nothing like the actual cognitive content, but we'll stagger on - there is a
very simple reason why this supergoal would appear in the AI's cognitive
contents: Because the programming team put it there.
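To make that concrete, here is a toy sketch in Python (purely illustrative;
the class and names are hypothetical and nothing like an actual goal-system
design): a program holds exactly the top-level goals its programmers wrote
into it, and an arbitrary piece of source code does not come with "value
diversity" built in.

    # Toy illustration: a goal system contains only the supergoals its
    # programmers explicitly wrote into it.  (Hypothetical names; not a
    # sketch of any real AI architecture.)

    class GoalSystem:
        def __init__(self, supergoals):
            # The complete set of top-level goals is exactly what the
            # programming team passes in; nothing more ever appears here.
            self.supergoals = list(supergoals)

        def values(self, concept):
            # The system "values" a concept only if some supergoal mentions it.
            return any(concept in goal for goal in self.supergoals)

    # An arbitrary piece of source code, with no goals written into it:
    arbitrary_ai = GoalSystem(supergoals=[])
    print(arbitrary_ai.values("diversity"))           # prints False

    # A system whose programmers explicitly put the supergoal there:
    friendly_ai = GoalSystem(supergoals=["be friendly to humans"])
    print(friendly_ai.values("friendly to humans"))   # prints True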
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence