RE: greatest threats to survival (was: why believe the truth?)

From: Harvey Newstrom (mail@HarveyNewstrom.com)
Date: Tue Jun 17 2003 - 16:19:03 MDT

    Rafal Smigrodzki wrote,
    > ### For all the decades of unmet expectations, AI relied on
    > computing power of the order of an ant, and only recently, as
    > Moravec writes, did they graduate to the computing power of a
    > mouse. Since AI on ant-powered computers gave ant-powered
    > results, and AI on mouse-powered computers gives mouse-powered
    > capacities (such as target tracking, simple learning, simple
    > motor control), we may expect that AI on human-level computers
    > will give human-level results. Human-level computing power is
    > going to be available to SingInst in about 15 years, so we can
    > expect the recursive self-enhancement of the FAI to take off
    > around that time.

    This makes a lot of sense at first glance. But is it right? Is the limiting factor of current AI really that the computers are underpowered? If that were the case, I would expect AI to work fine, just very slowly, and increasing computer power would bring it up to speed. But as I understand it, the problem is not speed or power. The problem is that AI programs still don't make the right decisions much of the time. Faster computers won't help them work better; they will merely perform the current failures faster. I think improvements in AI have little to do with hardware. They will be software- and methodology-based.
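    To make the point concrete, here is a toy sketch (my own illustration, not from the original post): a decision procedure with a flawed rule. "Running it on faster hardware" changes nothing about which answers it gets wrong, only how quickly it gets them wrong.

    ```python
    def flawed_is_even(n):
        # Deliberately buggy rule: treats only multiples of 4 as even,
        # so it misclassifies 2, 6, 10, ...
        return n % 4 == 0

    def run(numbers, decide):
        # Stand-in for "running the AI": apply the decision rule to inputs.
        return [decide(n) for n in numbers]

    slow_results = run(range(8), flawed_is_even)
    fast_results = run(range(8), flawed_is_even)  # "faster hardware": same logic
    assert slow_results == fast_results  # extra speed changes no answers
    print(slow_results)  # the misclassifications of 2 and 6 persist
    ```

    The flaw lives in the rule itself, so no amount of added computing power repairs it; only changing the software does.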

    -- 
    Harvey Newstrom, CISM, CISSP, IAM, IBMCP, GSEC
    Certified InfoSec Manager, Certified IS Security Pro, NSA-certified
    InfoSec Assessor, IBM-certified Security Consultant, SANS-cert GSEC
    <HarveyNewstrom.com> <Newstaff.com>
    


    This archive was generated by hypermail 2.1.5 : Tue Jun 17 2003 - 16:30:09 MDT