Dale Johnstone wrote:
> From: "Jim Fehlinger" <email@example.com>
> > Also, effects that some folks on this list can contemplate with
> > equanimity are events that would horrify many people outside of
> > extreme sci-fi/technophilic circles. For example, Eliezer Yudkowsky
> > has said on many occasions that, as long as the human race survives
> > long enough to give birth to some sort of superintelligence, the
> > ultimate fate of humanity is of no consequence (to him or,
> > presumably, in the ultimate scheme of things). I suspect that this
> > attitude is part of what gives folks like Bill Joy the willies.
> Please be more careful when quoting people. I'm sure the context of the word
> 'humanity' indicated 'human-ness'. I really don't think Eliezer means he
> doesn't care about the people living now, far from it.
Thank you, Dale! You phrased that perfectly - better than I did, in fact.
I've changed positions quite a bit over the years - for example, moving my
"default assumption" from objective morality to Friendly AI. Nonetheless,
these points have remained constant since the first day I heard the word
"Singularity":
1) One of the primary moral drivers behind the Singularity is to stop the
150,000 lives/day planetary death rate;
2) There is no conflict of interest between humanity and the Singularity;
3) If any parts of human-ness turn out to be a bad idea, we should ditch
them and be done with it.
(Some positions that have altered include:
1) Whether individual humans will be able to "opt out" of the Singularity
- moved from NO to YES.
2) Whether additional effort would be necessary to create a friendly
superintelligence - moved from NO to YES.
3) When the Singularity will occur - moved from 2025 to 2008.)
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2b30 : Mon May 28 2001 - 09:56:20 MDT