Re: Moravec article available

From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Tue May 02 2000 - 21:47:31 MDT


On Tue, 2 May 2000, Zero Powers wrote:

> Plus, when you add such things as drastically improved human-machine
> interfaces, neural implants and/or modifications, mind-uploading, telepathic
> technology, ubiquitous, wireless "always on" ultra-high bandwidth
> human-machine and human-human network connections, there seems little doubt
> that, if you find yourself still living 100 years from now chances are that
> the "you" of 100 years from now will be so different from the "you" of
> today, that you will agree that the "you" of today is effectively "dead".
> However, I don't think that will be cause for mourning. Instead it will be
> cause for celebration.
>

Zero raises an interesting point, not too different from my own
perspective. I have no resistance to self-evolution and little
resistance to long periods of "down-time" (cryonic suspension,
computer suspend mode, etc.). So long as those paths result from
my personal choices (or perhaps logical choices dictated by an
awareness of the "current" economic environment), they are
acceptable to me.

The Hansonian "singularity" scenario argues that others might
risk more and out-compete you, leaving them in a position to make
choices for you or even erase you. The Moravecian perspective seems to
argue that simple machines will auto-complexify to the point that
they make "humans" essentially irrelevant. For example, machines
become "self-aware" and discover a world populated by untrustable
humans. Since "untrustability" is a violation of the machine
ethical code, all humans must either be *made* trustable
or destroyed.

If this path is followed (only the alive and trustable are allowed
to remain), we should all be very wary.

Robert
