RE: greatest threats to survival (was: why believe the truth?)

From: Ramez Naam (mez@apexnano.com)
Date: Tue Jun 17 2003 - 18:55:47 MDT


    From: Rafal Smigrodzki [mailto:rafal@smigrodzki.org]
    > ### For all the decades of unmet expectations, AI relied on
    > computing power on the order of an ant's, and only recently, as
    > Moravec writes, did it graduate to the computing power of a
    > mouse. Since AI on ant-powered computers gave ant-powered
    > results, and AI on mouse-powered computers gives
    > mouse-powered capacities (such as target tracking, simple
    > learning, simple motor control), we may expect that AI on
    > human-level computers will give human-level results.

    I like this answer but also Harvey's rebuttal of it.

    Most AI researchers that I know of are not overly concerned with
    processing power. They think the fundamental problems of AI are
    design problems.

    But as you point out, roboticists do care about computing power.

    I find this an interesting dichotomy, actually. Roboticists,
    computer vision researchers, and the like deal with the messy
    physical world.

    More traditional branches of AI, which concern themselves with
    abstract reasoning problems like knowledge representation or
    mimicking human experts, deal with relatively clean and sparse
    sets of data. Even CYC's eventual knowledge database is still
    small compared to the flood of data that a computer vision system
    must process.
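    As a back-of-envelope comparison (the figures here are my own
    rough assumptions: CYC on the order of a million hand-entered
    assertions, against uncompressed VGA video):

        # Rough comparison: CYC's knowledge base vs. a vision system's
        # input stream. All figures are order-of-magnitude assumptions.
        cyc_assertions = 1_000_000        # assumed size of CYC's KB
        bytes_per_assertion = 100         # assumed average assertion size
        cyc_bytes = cyc_assertions * bytes_per_assertion

        # Uncompressed VGA video: 640x480 pixels, 3 bytes/pixel, 30 fps.
        vision_bytes_per_sec = 640 * 480 * 3 * 30

        # The entire knowledge base amounts to a few seconds of raw video.
        print(cyc_bytes / vision_bytes_per_sec)   # ~3.6 seconds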

    This suggests you're right that additional processing power
    will enable the creation of software and robots with higher levels of
    sensory and locomotor intelligence.

    However, it's not clear to me how we'll bridge the gap between that
    kind of intelligence and the abstract reasoning that we humans have.

    Right now, if I had infinite computing power and wanted to bridge
    that gap, I'd employ an evolutionary technique. Unfortunately,
    this is Eliezer's worst nightmare - a new form of intelligence
    grown by evolution and not necessarily friendly to us.
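    To be concrete about what I mean by an evolutionary technique,
    here's a minimal sketch of the loop in Python. The bit-counting
    fitness function is a toy stand-in; the point is only the shape
    of the process, which needs cycles rather than design insight.

        import random

        GENOME_LEN, POP_SIZE, GENERATIONS = 64, 100, 200

        def fitness(genome):
            # Toy objective: count of 1-bits. A real attempt would
            # score behavior in some environment instead.
            return sum(genome)

        def mutate(genome, rate=0.02):
            # Flip each bit independently with a small probability.
            return [b ^ (random.random() < rate) for b in genome]

        population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                      for _ in range(POP_SIZE)]

        for _ in range(GENERATIONS):
            # Keep the fitter half, refill with mutated copies.
            population.sort(key=fitness, reverse=True)
            survivors = population[:POP_SIZE // 2]
            population = survivors + [mutate(random.choice(survivors))
                                      for _ in range(POP_SIZE - len(survivors))]

        print(fitness(max(population, key=fitness)))   # best genome found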

    In any case, your post has mostly convinced me that increased
    computing power in and of itself may lead us to this kind of
    messy, physically-rooted intelligence. Just not the clean, abstract,
    designed-from-the-top intelligence that people usually associate with
    AI.
     
    > Human-level computing power is going to be available to
    > SingInst in about 15 years, so we can expect the recursive
    > self-enhancement of the FAI to take off around that time.

    I'm not convinced of this. You're basing it on Moravec's
    extrapolation from the computing power needed to replace the
    retina to the power needed for the whole brain? I think that's a
    pretty rough model. The retina lacks much of the complexity of
    the cortex.
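    For reference, the extrapolation runs roughly like this (figures
    as I recall them from Moravec; treat them as assumptions):

        # Moravec's scaling argument, with his approximate figures:
        # the retina does ~1000 MIPS worth of work, and the brain has
        # ~75,000 times the retina's mass of neural tissue.
        retina_mips = 1_000
        brain_to_retina_ratio = 75_000

        brain_mips = retina_mips * brain_to_retina_ratio
        print(brain_mips)   # ~7.5e7 MIPS, i.e. ~10^14 instructions/sec

    The rough step is the multiplication itself: it assumes a gram of
    cortex needs no more computation than a gram of retina.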


