Re: Singularity and AIs (was: We're stuck with each other)

From: Dan Clemmensen (dgc@cox.rr.com)
Date: Sat Jan 26 2002 - 18:21:50 MST


Robert J. Bradbury wrote:

> On Fri, 25 Jan 2002, Dan Clemmensen wrote:
>
>
>>Yes, you should be very careful with that prediction. It was arbitrary:
>>In 1996 I predicted that the singularity would happen within ten years.
>>
>
> Arbitrary! -- Arbitrary! Lord, I can't believe an esteemed extropian like
> Dan Clemmensen (whose google quotient may exceed my own) made an arbitrary
> prediction about when the singularity would occur. It's such a shame that
> I'm halfway around the world from the "wailing wall".

I'm a singularitarian, but not a true extropian. The payback times for
most libertarian changes to human society are too long, given a
near-term singularity.

>>If you prefer the "phase change" model, then the situation is even
>>worse. In this model, we have built a substrate that provides the
>>resources needed to create an SI, needing only some single breakthrough
>>to bring it into existence. With this model, the SI can be brought into
>>existence without warning by an individual or a small group.
>>
>
> The only smooth path I could envision for the singularity would
> be an underground breakout that takes advantage of the WWW.
> You would have to be able to co-opt a significant amount of
> underutilized resources to be able to manage an exponential
> growth path.

Correct. But any programmer who is willing to break the rules can in
fact co-opt these resources. There is an active underground that
discovers and publishes techniques for this. Thanks to Microsoft,
there is a legion of vulnerable computers. At a guess, there are
more than 100 million accessible, exploitable computers. The trick is
to discover the computers covertly in one pass, and then grab them and
start using them all at once, but still covertly. A well-designed
replicating scanner would be essentially undetectable in the current
internet, which is filled with poorly-designed replicating scanners such
as "Code Red."

Given this scheme, one could run 100 million instances of a massively
parallel program designed to minimize required bandwidth.
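
(For illustration only, and not something from the original post: the
kind of work-unit loop such a bandwidth-minimizing program might use
looks roughly like the Python sketch below. Every name in it, including
fetch_work_unit, compute, and report_result, is a hypothetical
placeholder.)

    import hashlib
    import time

    def fetch_work_unit():
        # Hypothetical placeholder: a real node would pull a few hundred
        # bytes of task description from a coordinator; here we just
        # fabricate a seed locally.
        return str(time.time()).encode()

    def compute(seed, iterations=1_000_000):
        # Long local computation, so network traffic per node stays tiny.
        digest = seed
        for _ in range(iterations):
            digest = hashlib.sha256(digest).digest()
        return digest

    def report_result(digest):
        # Hypothetical placeholder: a real node would send this small
        # result back upstream.
        print(digest.hex())

    if __name__ == "__main__":
        report_result(compute(fetch_work_unit()))

The point is that each node fetches a few hundred bytes, grinds locally
for a long time, and returns only a short digest, so even 100 million
nodes never saturate the network.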

As you are probably aware, my guess is that AGI (Artificial General
Intelligence) will not be created by normal human intelligence. Instead,
normal human programmers will create software that permits computers to
augment human intelligence via human-computer collaboration. The
resulting augmented humans will first enhance themselves and then
program AGI.

 



This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 13:37:36 MST