Samantha wrote:
> >>>From a sufficiently removed perspective, replacing the human race
> >>>with an intelligence vastly more aware ... may not be entirely evil.
> >>
> >>What framework allows you to step completely outside of humanity and
> >>value this as a non-evil possibility?
> >
> >I thought Hal did a good job of describing such a framework. Actually,
> >any framework which values things in terms other than whether they have
> >human DNA allows for the possibility of preferring AIs to humans.
>
>... Why should humans prefer AIs to humans ... What benefit is there
>for human beings (who/what we are) in this? ... Since we are human
>beings, human beings count quite centrally in our deliberations and must.
>Do you agree?

No, not if we have a choice. If we have no choice about what we consider
evil, because it is hard-wired into us, then there is no point in having
this discussion about what we should treat as evil. If we do have a
choice, however, then it remains an open question what should count.

What benefit is there to us to have children? What benefit is there to us
to help citizens of distant foreign countries? What benefit is there to us
to help animals? If we like and respect and care about such creatures,
then there's a benefit to us from helping them. Similarly, if we like and
respect and care about AIs, then there can be a benefit to helping them.
Robin Hanson rhanson@gmu.edu http://hanson.gmu.edu
Asst. Prof. Economics, George Mason University
MSN 1D3, Carow Hall, Fairfax VA 22030-4444
703-993-2326 FAX: 703-993-2323