--- "Eliezer S. Yudkowsky" <sentience@pobox.com>
wrote:
> John Marlow wrote:
> > It might push us to the limits of the universe and help us create
> > an everlasting utopia--or exterminate us all tomorrow for a reason
> > we couldn't even guess BECAUSE it's nothing like a human.
>
> Yes, that is the Great Coinflip. Again, my point is that this is an
> ineradicable coinflip unless you really think that humanity can spend
> the next billion years at exactly this intelligence level. If not,
> sooner or later we need to confront transhumanity. Every extra year is
> another year we have the opportunity to exterminate ourselves, so
> sooner is better. Can't make it to utopia if the goo eats through an
> artery.
>
**I'm all for confronting transhumanity and moving past it. I just
think (hunter-gatherer-human that I am) that we'd be best served by
something essentially...well, human. HOLD ON--I get it, I get it,
okay? What I mean is something that is MORE than human, but retains
the essence, if not the form. The fundamental drive toward improvement
is, after all, a human quality. The desire to comprehend, to know, to
learn. And--yes--emotion.
**I think we'd be better served by vaulting ourSELVES across that
threshold than by pushing something that never WAS human across it.
Though of course you see advantages in something that was never human.
I see disadvantages.
> > It is therefore a constant threat whose level at any given instant
> > is not only unknown but unknowable. And, being alien--it may well
> > view us the same way.
>
> Why are you attributing anthropomorphic xenophobia to an alien you
> just got through describing as unknowable?
**I'm not; I merely point to a possibility with dire
consequences--as you do when you cite our imminent
self-destruction.
> > **What, in your estimation, is the probability that military
> > interests will seize the operation/the AI at the critical time?
> > That they have/will infiltrate AI-creation efforts for this and
> > monitoring purposes? Remember--they're thinking like
> > hunter-gatherers, too...
>
> This requires a military mind sufficiently intelligent to get why AI
> is important and sufficiently stupid to not get why Friendliness is
> important.
**Nooo problem; plenty of those guys around. The
planet is lousy with them.
> > **Best info sources on this issue and on your take of this issue?
> > On AI/SI?
>
> I thought you'd never ask. I'd recommend:
**No need to wait; always open to new things.
>
> http://singinst.org/intro.html
> http://sysopmind.com/sing/PtS/navigation/deadlines.html
> http://singinst.org/CaTAI.html
**Thanks.
>
> You can find some non-Yudkowskian material
**Oh boy; can't wait! ;)
> at:
>
> http://dmoz.org/Society/Philosophy/Current_Movements/Transhumanism/
john marlow
>
> --
> Eliezer S. Yudkowsky                          http://singinst.org/
> Research Fellow, Singularity Institute for Artificial Intelligence