Re: greatest threats to survival (was: why believe the truth?)

From: Brett Paatsch (paatschb@optusnet.com.au)
Date: Tue Jun 17 2003 - 09:49:00 MDT

  • Next message: Robin Hanson: "RE: Why believe the truth?"

    Rafal Smigrodzki wrote:

    > Brett wrote:
    > >
    > > PS: If I *knew* what the greatest threat to survival (to, say, 150)
    > > was for the average healthy 36 year old Australian male, that
    > > might focus my energies wonderfully.
    >
    > ### UFAI.
    >
    > I think it could happen within the next 20 to 40 years, with a higher
    > probability than the sum total of the prosaic causes of death killing
    > you over the same time period.

    Un-friendly AI? That *is* interesting.

    Given that a 36 year old Australian male (not thinking of anyone in
    particular :-) would be 56 to 76 in the timeframe you nominate, and
    76 is probably slightly over the life expectancy on the actuarial
    tables bandied about at present, you really think unfriendly AI is
    that big a risk?

    Not being an AI enthusiast of the pedigree of certain others
    on this list I wonder:

    1) What is the probability of general AI in the next 20 years,
    of *either* the friendly or unfriendly variety? (I'm thinking about
    the massive parallelism of brains, and that maybe a subjective
    experience is a necessary pre-requisite for "I" and might not be
    so trivial to engineer.)

    2) How would this probability be figured? What assumptions
    are required? (I am an open-minded AI sceptic. But then I
    am an "I" sceptic too so that's not saying a great deal.)
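    [A toy way to frame question 2: the comparison Rafal is making can at
    least be put into numbers once you assume annual hazard rates. The
    rates below are illustrative placeholders, not actuarial data, and the
    function is just the standard constant-hazard cumulative-risk formula.]

```python
# Toy comparison: cumulative death risk over 20-40 years from "prosaic"
# causes vs. a hypothesised annual UFAI catastrophe risk.
# Both hazard rates are illustrative assumptions, not real data.

def cumulative_risk(annual_hazard, years):
    """Probability of the event occurring at least once over `years`,
    assuming a constant, independent annual hazard rate."""
    return 1 - (1 - annual_hazard) ** years

prosaic_hazard = 0.005   # assumed average annual mortality, ages 36-76
ufai_hazard = 0.02       # assumed annual probability of UFAI disaster

for years in (20, 40):
    p_prosaic = cumulative_risk(prosaic_hazard, years)
    p_ufai = cumulative_risk(ufai_hazard, years)
    print(f"{years} yrs: prosaic {p_prosaic:.1%}, UFAI {p_ufai:.1%}")
```

    [The point of the sketch is that the conclusion is driven entirely by
    the assumed UFAI hazard rate, which is exactly what question 2 is
    asking how anyone would figure.]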

    3) Can friendly AI be built that would be competitive with
    un-friendly AI or would the friendly AI be at the same sort
    of competitive/selective disadvantage as a lion that wastes
    time and sentiment (resources) making friends with zebras?

    Regards,
    Brett Paatsch

    This archive was generated by hypermail 2.1.5 : Tue Jun 17 2003 - 09:57:29 MDT