RE: greatest threats to survival (was: why believe the truth?)

From: Rafal Smigrodzki (rafal@smigrodzki.org)
Date: Wed Jun 18 2003 - 20:38:38 MDT


    Brett wrote:
    > Rafal Smigrodzki wrote:
    >
    >> Brett wrote:
    >>>
    >>> PS: If I *knew* what the greatest threat to survival to (say 150)
    >>> was for the average healthy 36 year old Australian male, that
    >>> might focus my energies wonderfully.
    >>
    >> ### UFAI.
    >>
    >> I think it could happen within the next 20 to 40 years, with a higher
    >> probability than the sum total of the prosaic causes of death killing
    >> you over the same time period.
    >
    > Un-friendly AI? That *is* interesting.
    >
    > Given that a 36 year old Australian male (not thinking of anyone in
    > particular :-) would be 56 to 76 in the timeframe you nominate, and
    > 76 is probably slightly over the average lifespan expected on the
    > tables bandied about at present, do you really think unfriendly AI is
    > that big a risk?

    ### If you lead a healthy life, have long-lived parents, and medical
    progress doesn't slow down, you should have a 70 - 80% chance of making it
    to 80 years. As to the AI - see below.
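
    A rough back-of-the-envelope sketch of what that comparison looks like
    (the per-year UFAI number is a pure assumption for illustration, not an
    estimate):

    # Python sketch: compare prosaic mortality with an assumed UFAI hazard.
    # The ~25% prosaic figure follows from the 70 - 80% survival chance
    # above; the 1%/year UFAI hazard is a made-up parameter.
    def cumulative_risk(annual_risk, years):
        # chance of at least one hit over the period, assuming independence
        return 1 - (1 - annual_risk) ** years

    prosaic_mortality = 0.25               # midpoint of the implied 20 - 30%
    ufai_risk = cumulative_risk(0.01, 30)  # assumed 1%/year over ~30 years
    print(prosaic_mortality, round(ufai_risk, 2))   # 0.25 vs ~0.26

    On those (arbitrary) numbers the two risks are already comparable, which
    is all my original claim needs.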
    -------------------------------
    >
    > Not being an AI enthusiast of the pedigree of certain others
    > on this list I wonder:
    >
    > 1) What is the probability of General AI in the next 20 years
    > of *either* friendly or unfriendly variety? (I'm thinking about the
    > massive parallelism of brains and that maybe subjective experience is
    > a necessary prerequisite for "I" and might not be so trivial to
    > engineer.)

    ### My uneducated guess is 30 - 40%.

    ----------------------------

    >
    > 2) How would this probability be figured? What assumptions
    > are required? (I am an open-minded AI sceptic. But then I
    > am an "I" sceptic too so that's not saying a great deal.)

    ### Fuzzy thinking about Moravec, Moore, maybe some futures markets. Is
    there a claim on the possibility of AGI available on ideafutures?
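
    For what it's worth, the fuzzy thinking can be made slightly less fuzzy; a
    toy extrapolation, where Moravec's ~10^14 ops/s brain-equivalence figure,
    the ~10^10 ops/s 2003 desktop, and the 18-month doubling time are all
    assumptions you can argue with:

    # Python sketch: naive Moore's-law extrapolation to Moravec's rough
    # estimate of human-brain-equivalent compute. All three constants are
    # assumptions; moving any of them shifts the answer by years.
    import math

    brain_ops = 1e14        # Moravec's brain-equivalence estimate, ops/s
    desktop_2003 = 1e10     # assumed ops/s for a cheap 2003 machine
    doubling_years = 1.5    # assumed Moore's-law doubling time

    doublings = math.log2(brain_ops / desktop_2003)
    print(round(2003 + doublings * doubling_years))   # ~2023 on these numbers

    Hardware parity by the early 2020s says nothing about software, which is
    why my guess stays at 30 - 40% rather than near certainty.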
    -----------------------------------
    >
    > 3) Can friendly AI be built that would be competitive with
    > un-friendly AI or would the friendly AI be at the same sort
    > of competitive/selective disadvantage as a lion that wastes
    > time and sentiment (resources) making friends with zebras?

    ### Hard to tell. If some humans try to build FAI, we will have better
    chances than if nobody does, but I wouldn't place any bets yet.

    Rafal


