Re: The non-existence of posthumans [was: Re: Heston Speech]

From: Charlie Stross
Date: Mon Feb 26 2001 - 07:21:29 MST

On Sun, Feb 25, 2001 at 11:39:37AM -0800, Max More wrote:
> At 10:09 AM 2/25/01, Steve Nichols wrote:
 [ snip ]
> > >By definition, posthumans will not exist until the singularity occurs,
> I don't agree with Mike on this one. I'm something of a Singularity
> skeptic, though it does depend on how the term Singularity is defined. I
> expect a powerful swell, rather than a sudden spike. At some stage in the
> swell of accelerating change we may legitimately claim to be posthuman

If you don't mind, I'm going to repost a usenet posting I made yesterday(!)
here. (Originally on rec.arts.sf.written, in a thread where Steve Stirling
had, for reasons best known to himself, decided to cock a snook at St Vinge
of the Singularity.)

Anyone got any bones to pick with the definition I give below?

-- Charlie

Newsgroups: rec.arts.sf.written
Subject: Re: REVIEW: Vernor Vinge's "Across Realtime"

Stoned koala bears drooled eucalyptus spittle in awe
as <> declared:

>Actually, Vinge's concept of the "singularity" is a thinly secularized
>religious concept of familiar hue.
>Sort of like "the Rapture" with pseudoscience tacked on; all the curves
>continue to rise exponentially, instead of turning over into an "S" the way
>they do in the real world (tm).

True and false in the same sentence.

On the one hand, yes: lots of people who think the singularity is imminent
fall into the same pitfall as the fundies with their rapture. And yes, the
development of new technologies tends to follow a sigmoid curve that caps
off after a while.

On the other hand, what we're seeing is *more and more* new technology
curves starting up, based on existing ones. For example, in the nineteenth
century the only really obvious ones in action were steam power,
telegraphy, and ironworks. These days, we probably see that many
fundamentally important new technologies starting up every week. As David
Brin commented a couple of years ago, there are now more scientists alive
and working full-time than in the whole of human history up to about
ten or twenty years ago. We've got an order of magnitude more people
than they had in 1900 -- and they're overall better educated. This means
*more* than an order of magnitude more scientists and engineers working,
so it would be astonishing if they weren't coming up with new discoveries
and technologies.

The real point of the singularity argument, though, is this: posit a
single assumption -- that intelligence is a computational process (or can
be emulated by computational processes). This assumption falls out of an
explicit rejection of Cartesian dualism (for which neuroscience has found
no supporting evidence). It's not a cut-and-dried issue yet, but opponents
of computational intelligence have to jump through quite a few hoops if
they want to prove that it is impossible (cf. Roger Penrose, John Searle).

*IF* we posit the possibility of a computational intelligence that is human-
equivalent in its ability to cogitate, and *IF* we develop such a tool,
*THEN* we can be certain that said tool can also invent a CI. We can also
be certain that by throwing more (or vastly more) processing resources at
it, we can enhance its speed. Thus, we get what Vinge in his paper
describes as "weak superhumanity" (or "fast thinking"). At this point,
the rate of change goes through the
roof -- human thought processes are slow in comparison.
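The "fast thinking" claim reduces to a one-line relation, sketched below
(the 1000x speed factor is an arbitrary illustrative number, not anything
from Vinge's paper):

```python
# Toy model of "weak superhumanity" as a pure speed-up. A
# human-equivalent mind running speed_factor times faster than a
# human accumulates that many years of subjective thinking per
# calendar year; throwing more hardware at it raises speed_factor.

def subjective_years(calendar_years: float, speed_factor: float) -> float:
    """Subjective thinking-time accumulated over a calendar span."""
    return calendar_years * speed_factor

# One calendar decade for a 1000x "fast thinker" is ten millennia
# of subjective thought.
print(subjective_years(10.0, 1000.0))  # 10000.0
```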

This still doesn't give us the singularity (a la Rapture) that many
people believe in; it just gives us very rapid incremental improvements
to everything we can see around us, such that a lot of hard problems get
solved. (For example, if there's a non-obvious shortcut to building an
economically useful cheap fusion reactor, expect it to be invented at
this point.)

The big leap of faith comes if we posit the possibility of higher-order
types of intelligence existing. Just what these are, or would act like,
isn't really clear. (However, Hans Moravec takes a stab at them.) If it's possible
for a human-type intelligence to build a computational higher-order
intelligence, then this is more likely to be carried out by a weakly
superhuman AI trying to augment itself than by human scientists. And this
really does represent a singularity, because beyond this point we can no
more understand the intelligences modifying the universe we live in than
the cat sitting on my mouse-mat and purring right now can understand this.

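Why self-augmentation makes this a genuine horizon can be seen in a toy
model (all parameters below are illustrative assumptions, not claims from
Vinge): if each redesign takes a fixed amount of *subjective* work but
multiplies the designer's speed by k, the calendar time per generation
shrinks geometrically, and for k > 1 the total calendar time stays
bounded no matter how many generations follow.

```python
# Hedged sketch: generation i runs at speed k**i, so a fixed unit
# of subjective redesign work costs 1/k**i calendar years. The
# calendar time for n generations is a geometric series that
# converges (to 2 years when k = 2) -- a horizon in calendar time.

def time_to_generation(n: int, subjective_work: float = 1.0,
                       k: float = 2.0) -> float:
    """Calendar years to complete n self-improvement generations,
    starting at speed 1 and multiplying speed by k each time."""
    return sum(subjective_work / k**i for i in range(n))

print(time_to_generation(5))   # 1.9375
print(time_to_generation(50))  # just under 2.0
```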
Anyway, this gives us a chain of possibilities, linked by assumptions
about the world that are testable:

* Do combinations of technologies facilitate the serendipitous discovery
  of yet more new technologies? (If so, each technology might follow a
  sigmoidal curve of development, but the overall curve you get when you
  superimpose them all will be exponential).

* Is consciousness computationally tractable? (If you believe in immaterial
  spirits and ghosts, the answer is clearly "no"; if, on the other hand,
  you believe that our consciousness is an emergent property of brains,
  which are just big wet endocrine glands with horribly complicated
  internal connections, the answer is clearly "yes" -- although the
  *degree* of tractability can vary; estimates of the computational
  bandwidth of a brain run in the range 10^16 to 10^24 MIPS, but could go
  drastically higher if quantum processes are involved, as Penrose asserts).
  (See Moravec's paper; personally I think he's over-optimistic.)

* This raises the question of mind uploading: is human consciousness
  something that can be transferred to a computing device? However,
  simulated neurons in a well-known biological system have been
  successfully coupled to a real system. (The paper is "Interacting
  biological and electronic neurons generate realistic oscillatory
  rhythms", pub. Neuroreport on February 28, 2000.) This appears to
  be the first actual case of uploading in practice -- admittedly, of
  a couple of neurons in the stomatogastric ganglion of a spiny
  lobster -- using a tool which, in fifty years' time, might be able
  to do it in real time to a human brain.

* If consciousness is computational, can it be optimized for higher
  performance? (Almost certainly "yes" for linear speed improvements --
  very probably "yes" for algorithmic improvements *if* mimicking
  human consciousness is less of a priority than getting thinking done

* Are higher orders of intelligence possible? (Don't know -- we have no
  evidence, though there are some interesting speculations on the
  implications of causality violation for computation.)
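On the brain-bandwidth point in the list above, here's a hedged
back-of-envelope in the usual Moravec style (neuron counts and firing
rates below are rough textbook figures I'm assuming, not numbers from
this post; where you land in the quoted range depends heavily on how
many machine instructions you charge per synaptic event):

```python
# Back-of-envelope: count peak synaptic events per second.
# ~1e11 neurons, ~1e4 synapses each, firing at up to ~1e2 Hz,
# with one synaptic event taken as very roughly one operation.
# Charging more instructions per event (detailed biophysical
# simulation) pushes the figure up by orders of magnitude.

neurons = 1e11
synapses_per_neuron = 1e4
max_firing_rate_hz = 1e2  # spikes per second

ops_per_second = neurons * synapses_per_neuron * max_firing_rate_hz
print(f"{ops_per_second:.0e} ops/s")  # 1e+17 ops/s
```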

If you stack enough "S" curves on top of each other, you get something
that looks awfully like an exponential ...
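The stacked-sigmoid claim is easy to check numerically. A sketch, with
illustrative parameters of my own choosing (regularly spaced onsets,
ceilings growing geometrically because each technology builds on its
predecessors):

```python
# Superimpose logistic "S" curves. Each component saturates, but
# the sum keeps growing by a roughly constant factor per unit
# time over the window shown -- the signature of an exponential.
import math

def logistic(t: float, onset: float, ceiling: float,
             rate: float = 1.0) -> float:
    """A single sigmoidal technology-development curve."""
    return ceiling / (1.0 + math.exp(-rate * (t - onset)))

def stacked(t: float, n_curves: int = 40, spacing: float = 1.0,
            growth: float = 1.2) -> float:
    """Sum of n_curves sigmoids; curve i starts at i*spacing and
    tops out at growth**i (illustrative parameters)."""
    return sum(logistic(t, i * spacing, growth**i)
               for i in range(n_curves))

# Successive samples grow by a near-constant ratio, even though
# every individual component flattens out.
for t in (10.0, 15.0, 20.0):
    print(t, stacked(t))
```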

-- Charlie

This archive was generated by hypermail 2b30 : Mon May 28 2001 - 09:56:48 MDT