Re: Why would AI want to be friendly?

From: Michael S. Lorrey (retroman@turbont.net)
Date: Thu Sep 28 2000 - 11:39:11 MDT


Emlyn wrote:
>
> > Your reasoning is based on a slow, soft Singularity, where both the
> > machines and humans converge, eventually resulting in an amalgamation,
> > advancing slowly enough so that virtually everybody can follow. While
> > it may happen that way, I don't think it to be likely. I would like to
> > hear some convincing arguments as to why you think I'm mistaken.
> >
>
> A singularity which is "slow"; can someone explain this to me?

A fast singularity posits that as soon as a smarter-than-human AI or mind upload
occurs, it will be a matter of days, weeks, or months before they and their
associates go asymptotic.

A slow singularity posits that long before this occurs, there will be a long,
gradual phase of human augmentation technology development, in which people add
more and more capabilities to their own minds. Eventually their original bodies
may die without their even noticing, because the wetware/meatware part of their
being has become such a small percentage of their actual selves. I personally am
betting on this occurring, and not the punctuated equilibrium that others seem
to think will come with a fast singularity.



This archive was generated by hypermail 2b29 : Mon Oct 02 2000 - 17:39:18 MDT