RE: Why will we reach the singularity?

From: Ramez Naam (mez@apexnano.com)
Date: Sat Mar 01 2003 - 21:35:58 MST

    From: Joao Magalhaes [mailto:joao.magalhaes@fundp.ac.be]
    > I've been wondering why transhumanists are so confident
    > that we will reach the singularity.

    Joao, this is a great question. I wonder the same thing myself on a
    regular basis.

    Personally I don't like the term "singularity". It's rather
    apocalyptic and not particularly descriptive.

    That having been said, it seems to me that there are three things that
    many extropians would accept as signs of the so-called singularity.

    1) Self-replicating molecular nanotechnology

    2) Human uploads

    3) Non-upload AI with greater than human general intelligence

    No one has shown me compelling evidence that any of these will be
    developed in the next 50 years. Of all of these I think uploads are
    the most likely to occur in that time, at around 2050.

    > In the end, I would say that the basis for the singularity is
    > Moore's law.

    More generally, it's the accelerating ability of mankind to
    manipulate information, and the potentially recursive nature of
    that ability.

    > Yet I'm sure there are physical limits for Moore's law.
    > When will we reach them? Can you be sure Moore's law will
    > continue for long enough to develop a smarter-than-man
    > artificial intelligence?

    I think you're mixing a few questions here.

    1) What's the physical limit of computational density?

    2) With computers at that limit, can we develop AIs or uploads?

    3) At what rate will we push towards that limit?

    My answers:

    1) FORESEEABLE COMPUTING POWER - Shrinking circuits to what we
    currently consider the physical limits, and then moving to a
    three-dimensional processor design, seems to give us around a factor
    of 10^10 more computing power than today: around 10^19 ops / second
    in a PC, or 10^22 ops / second in a supercomputer. I'm being pretty
    casual here, just assuming that we can move feature sizes down to
    0.01 microns, build independent circuit layers at that size, and
    stack them on top of each other. Other techniques may in fact take
    us far past this level.
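    As a sanity check on that factor of 10^10, here's a rough
    back-of-the-envelope sketch. The 0.13 micron starting point is my
    assumption for a 2003 feature size; the target size and the 10^10
    total come from the estimate above.

```python
# Rough check of the "factor of 10^10" estimate. The 0.13 micron
# starting point is an assumed 2003 feature size.
old_feature_um = 0.13
new_feature_um = 0.01

# 2-D circuit density scales with the square of the linear shrink.
density_gain = (old_feature_um / new_feature_um) ** 2   # ~169x

# How many stacked layers 3-D stacking would have to supply, if the
# whole 10^10 came from density and layering alone.
target_factor = 1e10
layers_needed = target_factor / density_gain

print(f"density gain from shrink: ~{density_gain:.0f}x")
print(f"layers implied by 10^10 total: ~{layers_needed:.1e}")
```

    The shrink alone buys only ~170x, so nearly all of the 10^10 has to
    come from stacking (tens of millions of layers) or from gains -
    clock speed, architecture - that the casual estimate doesn't
    itemize; that's the sense in which it is casual.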

    2) COMPUTING POWER NECESSARY FOR AI / UPLOADS - Compared to many
    posters on this list, I'm a bit of a pessimist about simulating the
    human brain. My own estimate is that it'll take around 10^20 ops /
    second: roughly 10^14 synapses, signals at up to 100 Hz, 100x
    oversampling, and 100 instructions to simulate each timeslice at
    each synapse. This is close enough to my naïve projection of
    feasible computing power that I conclude we will indeed be able to
    run a human or human-equivalent mind on such a device.
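    Multiplying those four numbers out (all of them the estimate's own
    assumptions):

```python
# The upload estimate above, multiplied out. All four numbers come
# from the estimate in the text.
synapses     = 1e14   # synapses in a human brain
signal_hz    = 100    # signal rate, up to 100 Hz
oversampling = 100    # 100x oversampling of each signal period
instructions = 100    # instructions per synapse per timeslice

ops_per_second = synapses * signal_hz * oversampling * instructions
print(f"{ops_per_second:.0e} ops / second")   # → 1e+20 ops / second
```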

    3) RATE OF COMPUTING POWER GROWTH - At the current rate of growth,
    we'll reach 10^20 ops / second in a PC-analogous device around 2050.
    But will the current rate of growth hold? That's the gazillion
    dollar question. The problem with Moore's Law is that it's a purely
    empirical observation. There's no underlying theory for *why*
    computing power increases at that rate. Kurzweil argues that since
    computing power has increased at the same rate across five
    successive computing technologies, it should continue at that rate
    in whatever future technologies we'll need to reach 10^20 or so
    ops / second. That's comforting, but it's not a sound argument.
    History is full of technologies that followed an S-curve -
    progressing exponentially for some time and then slowing or even
    tapering off.
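    To make the sensitivity of that "around 2050" figure concrete, here
    is a minimal extrapolation sketch. The baseline and doubling time
    are my assumptions, not claims from the post: roughly 10^9 ops /
    second for a 2003 PC and one doubling every 18 months.

```python
import math

# Extrapolate when a PC reaches 10^20 ops/second if the current trend
# simply continues. Baseline and doubling time are assumed values.
baseline_year  = 2003
baseline_ops   = 1e9    # assumed: rough ops/second of a 2003 PC
doubling_years = 1.5    # assumed: one doubling every 18 months
target_ops     = 1e20

doublings = math.log2(target_ops / baseline_ops)   # ~36.5 doublings
year = baseline_year + doublings * doubling_years
print(f"target reached around {year:.0f}")   # → target reached around 2058
```

    Note how sensitive the answer is: with a 12-month doubling time the
    same arithmetic gives roughly 2040, and with a 24-month doubling
    time roughly 2076 - which is exactly why the date is a topic for
    speculation rather than rigorous argument.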

    No one has shown me why Moore's Law must continue at the same rate.
    Kurzweil believes that Moore's Law is accelerating, but I haven't seen
    any evidence to support this position either. I have every confidence
    that mankind will reach those 10^20 ops / second individual devices at
    some point, but exactly when that will occur seems to be a topic more
    for speculation than for rigorous argument today.

    > When I found transhumanism several years ago, I thought
    > it set out an optimistic but plausible scenario. Now I'm
    > starting to wonder whether we're just another cult,
    > willing to sacrifice reality for a fairer image of the
    > world. Please prove me wrong.

    I think there's certainly a tendency to *want* to see one's own
    favorite utopia in the future.

    Personally, I think the best kind of transhumanist activity is the
    kind that helps *create* the future we want to see. There are any
    number of routes to do that. You, Anders, and others on this list are
    engaged in direct research aligned with transhumanist goals. Robert
    and I have both spent time, energy, and money trying to bring this
    kind of research into the commercial realm. Damien, the betterhumans
    folks, and others have spent their time educating the public. All of
    these activities increase the extropy of the world. So, rather than
    worry too much about things that we *can't* affect, I suggest we all
    remain focused on the individual contributions that we *can* make to
    global extropy.

    cheers,
    mez



    This archive was generated by hypermail 2.1.5 : Sat Mar 01 2003 - 21:40:28 MST