Eliezer S. Yudkowsky wrote:
> ... if
> the AI successfully comes up with reasonable and friendly answers for
> ambiguous and unspecified use-cases;
Who gets to decide if they're "friendly" answers?
Why limit it to one AI? Take a poll of the entire community of AIs?
> and ... if the AI goes through at least one
> unaided change of personal philosophy or cognitive architecture while
> preserving Friendliness - if it gets to the point where the AI is clearly more
> altruistic than the programmers *and* smarter about what constitutes altruism
> - then why not go for it?
Because that would make AIs diametrically opposed to normal human behavior?
--J. R.
"The man who does not vex anti-Singularitarians has no advantage over the man
who
can't vex them."
--Alligator Grundy