Re: greatest threats to survival (was: why believe the truth?)

From: Kevin Freels (megaquark@hotmail.com)
Date: Tue Jun 17 2003 - 12:17:14 MDT


    Just a thought..... It doesn't seem likely that massive breakthroughs are
    predictable 20 years in advance. In 1949, no one expected to be on the moon
    in 1969. In 1883, powered flight wasn't expected anytime soon. In 1976, the
    current access to information was unfathomable except to a few people
    speculating; there was no real expectation of it happening. Such is the
    nature of discovery.

    ----- Original Message -----
    From: "Ramez Naam" <mez@apexnano.com>
    To: <extropians@extropy.org>
    Sent: Tuesday, June 17, 2003 12:13 PM
    Subject: RE: greatest threats to survival (was: why believe the truth?)

    > From: Brett Paatsch [mailto:paatschb@optusnet.com.au]
    > > 1) What is the probability of General AI in the next 20 years
    > > of *either* friendly or unfriendly variety?
    >
    > It seems rather low to me. There is no evidence of a fundamental
    > design breakthrough in the field of AI[*]. Without such a
    > breakthrough it seems unlikely that humans will be able to design an
    > artificial general intelligence (AGI).
    >
    > The alternative approach of uploading / simulating a human brain will
    > not be computationally viable in 20 years without a massive leap
    > beyond the Moore's Law projections for that time. There is no
    > evidence for this massive computational leap either.
    >
    > > 3) Can friendly AI be built that would be competitive with
    > > un-friendly AI or would the friendly AI be at the same sort
    > > of competitive/selective disadvantage as a lion that wastes
    > > time and sentiment (resources) making friends with zebras?
    >
    > This is a fine question. I find the reports of evolution's demise to
    > be greatly exaggerated. So long as the universe contains replicators
    > and competition for resources there will be evolution. And as you
    > point out, unfriendly AIs may be more competitive than friendly AIs.
    >
    > mez
    >
    > * - I have a great deal of respect for Eliezer, Ben Goertzel, and
    > others in this community who are working on AGI. However, until they
    > produce compelling experimental evidence, I'll put them in the
    > category of smart people working on a problem that has stumped every
    > other smart person who's worked on it.
    >
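    The Moore's-law extrapolation in the quoted message can be sketched as a
    back-of-envelope calculation. Every figure below is an assumption chosen
    for illustration (the ~18-month doubling time, a ~1e10 FLOPS desktop
    baseline for 2003, and the wide range of whole-brain simulation
    estimates), not a sourced fact:

    ```python
    # Hedged sketch: how far does a straight Moore's-law extrapolation get
    # from 2003 in 20 years, and is that enough for brain simulation?
    # All constants are illustrative assumptions.

    DOUBLING_TIME_YEARS = 1.5   # assumption: classic ~18-month doubling
    YEARS_AHEAD = 20
    BASELINE_FLOPS_2003 = 1e10  # assumption: rough 2003 desktop

    growth = 2 ** (YEARS_AHEAD / DOUBLING_TIME_YEARS)
    print(f"Compute growth over {YEARS_AHEAD} years: ~{growth:,.0f}x")

    # Published whole-brain estimates span orders of magnitude
    # (assumption: roughly 1e14 to 1e19 ops/s), so the conclusion is
    # sensitive to which estimate one picks.
    for brain_est in (1e14, 1e16, 1e19):
        needed = brain_est / BASELINE_FLOPS_2003
        verdict = "enough" if growth >= needed else "not enough"
        print(f"brain @ {brain_est:.0e} ops/s -> needs {needed:.0e}x; "
              f"Moore's law alone is {verdict}")
    ```

    Under these assumptions the ~10,000x gain only reaches the most
    optimistic brain estimates, which is consistent with the point that a
    leap well beyond the Moore's-law projection would be required.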



    This archive was generated by hypermail 2.1.5 : Tue Jun 17 2003 - 12:23:22 MDT