On 10/2/2000, samantha wrote:
> > From a sufficiently removed
> > perspective, replacing the human race with an intelligence vastly more
> > aware, more perceptive, more intelligent and more conscious may not be
> > entirely evil. I don't say it should happen, but it is something to
> > consider in evaluating the morality of this outcome.
>
>You can only evaluate a morality within a framework allowing valuing.
>What framework allows you to step completely outside of humanity and
>value this as a non-evil possibility?
I thought Hal did a good job of describing such a framework. Actually,
any framework which values things in terms other than whether they have
human DNA allows for the possibility of preferring AIs to humans. The
only way to avoid this issue is to stack the deck and declare that only
humans count.
Robin Hanson rhanson@gmu.edu http://hanson.gmu.edu
Asst. Prof. Economics, George Mason University
MSN 1D3, Carow Hall, Fairfax VA 22030-4444
703-993-2326 FAX: 703-993-2323