Re: Questions about the Singularity

From: Anders Sandberg (asa@nada.kth.se)
Date: Wed Sep 05 2001 - 03:31:18 MDT


On Tue, Sep 04, 2001 at 04:31:56PM +0200, Waldemar Ingdahl wrote:
> Hello everybody

> I've been pondering some questions about the concept of the singularity
> and its meaning in transhumanist philosophy.

> Certainly the term is a bit difficult to trace sometimes, since it
> comes from Teilhard de Chardin's thoughts on the Omega point (now that
> is a shaaaky fellow, but more on TDC some other time) and also from
> the Tiplerite tradition, but I have also seen less "mystical"
> definitions of the term.

I think Teilhard never really proposed the singularity; his omega point
was simply the endpoint of history - a singular moment, but not linked
to any acceleration. Similarly, Frank Tipler's take on the omega point
is also the end of history (in this case literally, since it has to
occur at the last moment of the universe - there is an informal
"theorem" that says such infinite information states have to be at the
borders of the spacetime of the universe). Tipler does assume an
infinite acceleration of thought in his scenario, but it is very
different from the feedback process of improvement that Vernor Vinge
introduced.

> Also, not all of us think that a singularity is possible - but is it
> desirable as a philosophical concept to be present in the discourse?
>
> I ask myself: isn't the concept of a singularity the last gasp of
> Christian eschatology? That the concept of an unlimited development
> became a far too great leap, so that you had to invent an end time? A
> time after which progress would be so immense that discussion of it
> would be pointless, often with the discussion falling down into an
> abstract paradise. But hasn't the introduction of eschatology also
> introduced the concept of immanentism in transhumanism, with all its
> perils?

I usually complain that people discussing the singularity often end up
sounding like theologians or, worse, millennialist believers. There is
a very real danger of the singularity acting as sticky fly paper in the
discussion, making everybody waste their energy in debating nanomachines
on a pin rather than how to get there. But at the same time the
singularity is a part of the large perspective of transhumanism, trying
to look at human history on the biggest scale and asking what long-range
possibilities there are.

> Virtue is achieved through action, actions that may be very,
> very, very much more complex than ours - but there is still action
> required to achieve virtue.
> The Omega point, seen as inert, is thus not virtuous. Thus the
> Omega point is not a state of Eudaimonia.

I think you mix up the singularity and the omega point quite a bit. Here
are the definitions from my terminology page:

        Omega Point: A possible future state when intelligence controls
        the Universe totally, and the amount of information processed
        and stored goes asymptotically towards infinity.

        Singularity: The postulated point or short period in our future
        when our self-guided evolutionary development accelerates
        enormously (powered by nanotechnology, neuroscience, AI, and
        perhaps uploading) so that nothing beyond that time can reliably
        be conceived.

Note that unless the singularity leads to some magical technology that
enables an omega point at once (and this is very unlikely for physical
reasons) the post-singularity state will not be omniscient.

The omega point is also a limit point rather than a state - properly
speaking the important thing is the growth of information towards
infinity. If you read Tipler's analysis of how to achieve this, you will
see that it involves tremendous action - the entire universe has to be
remade, and a literally infinite number of challenges at increasingly
higher energies have to be overcome. There is plenty of space for
eudaimonia in the approach to omega.
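
A rough way to put the "limit point rather than a state" claim in
symbols (the notation here is mine, not Tipler's):

```latex
% Sketch of the limit-point idea. Let I(t) be the cumulative information
% processed by cosmological time t, and t_Omega the final moment of the
% closed universe. Tipler's requirement is that I(t) diverges as t
% approaches t_Omega, even though the remaining proper time is finite:
\lim_{t \to t_{\Omega}} I(t) = \infty
\quad\text{while}\quad
t_{\Omega} - t_0 < \infty .
% So the omega point is the limit of the process, never a state actually
% occupied: at every moment t < t_Omega, I(t) is still finite.
```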

But you are right about the dangers of spending our time dreaming of
Jupiter brains when there are real brains to use right here. Some
far-range speculations have relevant consequences close at home - "Is
nanotech so dangerous it has to be banned?", for example - and they can
and should be analysed with the best intellectual tools we have. But I
think more positive results can be achieved by applying the knowledge,
technology and ideas we have here and now (and soon) to create a world
where unbounded progress is possible.

-- 
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y



This archive was generated by hypermail 2b30 : Fri Oct 12 2001 - 14:40:25 MDT