Re: greatest threats to survival (was: why believe the truth?)

From: Brett Paatsch (paatschb@optusnet.com.au)
Date: Thu Jun 19 2003 - 07:07:19 MDT

    Rafal writes:
    > Brett wrote:
    > > Rafal Smigrodzki wrote:
    > >
    > >> Brett wrote:
    > >>>
    > >>> PS: If I *knew* what the greatest threat to survival
    > >>> (to, say, 150) was for the average healthy 36 year old
    > >>> Australian male, that might focus my energies
    > >>> wonderfully.
    > >>
    > >> ### UFAI.
    > >>
    > >> I think it could happen within the next 20 to 40 years,
    > >> with a higher probability than the sum total of the
    > >> prosaic causes of death killing you over the same
    > >> time period.
    > >
    > > Un-friendly AI ? That *is* interesting.
    > >
    > > ...a 36 year old would be 56 to 76 in the timeframe you
    > > nominate, and 76 is probably slightly over the expected
    > > average lifespan ...you really think unfriendly AI is
    > > that big a risk?
    >
    > ### If you are leading a healthy life, have long-lived
    > parents, and medical progress doesn't slow down, you
    > should have a 70 - 80 % chance of making it to 80 years.

    i.e. 20 - 30 % mortality by 80 (assuming no major surprises).

    > As to the AI - see below.
    > -------------------------------
    > >
    > > Not being an AI enthusiast of the pedigree of certain
    > > others on this list, I wonder:
    > >
    > > 1) What is the probability of General AI, of *either* the
    > > friendly or unfriendly variety, in the next 20 years?
    > > (I'm thinking about the massive parallelism of brains
    > > and that maybe a subjective is a necessary prerequisite
    > > for "I" and might not be so trivial to engineer.)
    >
    > ### My uneducated guess is 30 - 40%.

    Yep. In 20 years. So going higher over the following 20 yrs.

    So: 20 - 30 % mortality by 80 yrs with a pretty low X factor,
    vs. 30 - 40 % mortality by 60 yrs but with a very high X factor.

    The X factor (unknown unknowns) seems too high for me to
    rationally choose to spend my time lobbying to stop AI
    (defensively), when my potential opponents would be
    numerous and hard to identify, rather than pursue anti-
    aging (pro-actively), when my potential allies are numerous
    and pretty easy to find.
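    To make that comparison concrete, here is a rough back-of-
    envelope sketch in Python. The 20 - 30 % and 30 - 40 % ranges
    are just the figures quoted above; the 50 % chance that an AGI
    turns out unfriendly and fatal, and the assumption that the two
    risks are independent, are pure guesses on my part.

    # Back-of-envelope comparison of the rough risk figures quoted above.
    # The ranges come from this thread; the "unfriendly and fatal" figure
    # and the independence assumption are guesses, not data.

    baseline_mortality_by_80 = 0.25      # midpoint of the 20 - 30 % figure
    p_agi_within_20_yrs = 0.35           # midpoint of Rafal's 30 - 40 % guess
    p_agi_unfriendly_and_fatal = 0.5     # pure guess -- part of the X factor

    # If the AGI risk were independent of ordinary mortality:
    ufai_risk_by_60 = p_agi_within_20_yrs * p_agi_unfriendly_and_fatal

    print(f"ordinary mortality by ~80: {baseline_mortality_by_80:.0%}")
    print(f"rough UFAI risk by ~60:    {ufai_risk_by_60:.0%}")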

    Perhaps a third option is to join the friendly AI race.

    But it would seem to make little sense to throw one's
    resources into joining such a race without a good reason
    for believing (A) that "friendly" AI is not inherently absurd
    and (B) that "friendly" AI is not going to be intrinsically at
    a disadvantage to UFAI. Perhaps one has to be a zebra
    to spend time trying to build a vegetarian lion?

    > ----------------------------
    >
    > >
    > > 2) How would this probability be figured? What
    > > assumptions are required? (I am an open-minded
    > > AI sceptic. But then I am an "I" sceptic too so that's
    > > not saying a great deal.)
    >
    > ### Fuzzy thinking about Moravec, Moore, maybe
    > some futures markets.
    >
    > Is there a futures market on the possibility of AGI
    > available on ideafutures?

    I'll leave that one to Robin.

    > -----------------------------------
    > >
    > > 3) Can friendly AI be built that would be competitive
    > > with un-friendly AI or would the friendly AI be at the
    > > same sort of competitive/selective disadvantage as a
    > > lion that wastes time and sentiment (resources)
    > > making friends with zebras?
    >
    > ### Hard to tell. If some humans try to build FAI, we
    > will have better chances than if nobody does, but I
    > wouldn't place any bets yet.

    I wonder about this. What if some person A's friendly AI
    turns out to be person B's unfriendly AI?

    Quite a bit seems to turn on the question of what
    "friendly" means in the *context* of an AI, and where the
    boundaries of its "friendliness" would end, or appear to
    various people to end.

    Weren't Oppenheimer and General Groves trying to
    build a 'friendly' nuke BEFORE Von Braun and
    Heisenberg's 'leader' built one that he'd regard with
    fondness?

    Regards,
    Brett Paatsch


