Re: greatest threats to survival (was: why believe the truth?)

From: Brett Paatsch (paatschb@optusnet.com.au)
Date: Tue Jun 17 2003 - 23:20:09 MDT


    Eliezer S. Yudkowsky writes:
    > Rafal Smigrodzki wrote:
    > >
    > > ### For all the decades of unmet expectations, AI relied
    > > on computing power of the order of an ant, and only
    > > recently, as Moravec writes, did they graduate to the
    > > computing power of a mouse. Since AI on ant-powered
    > > computers gave ant-powered results, and AI on mouse-
    > > powered computers gives mouse-powered capacities
    > > (such as target tracking, simple learning, simple motor
    > > control), we may expect that AI on human-level computers
    > > will give human-level results. Human-level computing power
    > > is going to be available to SingInst in about 15 years, so we
    > > can expect the recursive self-enhancement of the FAI to
    > > take off around that time.
    > >
    > > QED?
    >
    > No, unfortunately, as far as I can tell, we have *enough*
    > computing power available for AI now. Yes, right now.
    > *More* computing power will make it *easier*, again
    > unfortunately so. At least with current computing power
    > it should still be fairly *hard* for the standard flounder-
    > around style of AI to get anywhere.

    Unfortunately so? It seems that you are not particularly
    confident that Friendly AI is the most likely AI to emerge,
    Eliezer, and that there are substantial risks in having more
    potential AI "creators" running amok. Perhaps it is my
    relative ignorance that keeps me from seeing the risks the
    way you do. Perhaps you can educate some of us who are
    inadequately on guard.

    To me, the word "friendly" has connotations of sociability,
    and I see most (perhaps even all) sociable creatures as
    being sociable out of evolutionary need.

    I wonder what "friendly" could mean, operationally, in the
    context of an AI.

    Consider sociopaths. They are good observers of patterns
    and are able to simulate empathy that they apparently
    don't feel. I think there has been research showing that
    all children learn to lie, but that the brighter ones learn
    earlier (crudely speaking).

    If one meets, or even creates, an ostensibly friendly AI,
    how does one *know* it is *truly* friendly?

    I think an interesting opening question to put to that AI
    might be: what does "friendly" mean?

    It's also possible that, in relation to friendly AI, humans
    might have different notions of friendliness. For some, a
    friendly AI may be one constantly vigilant for the emergence
    of unfriendly AIs and engaged in chasing foolish, childlike
    humans away from technologies that might produce them.
    For others, this sort of paternalistic protection would seem
    not like friendliness but like a curtailment of their liberties.

    It seems the question might become "friendly" on whose
    terms. My hypothetical Rottweiler might be very "friendly"
    and protective towards me but not so "friendly" to cats or
    to other people it regards as a threat to me.

    Humans draw boundaries in places that are not entirely
    logical. We may, for instance, grant human rights to a
    comatose "vegetative" person on life support but not to
    a healthy chimpanzee. How would a "friendly" AI (neither
    human nor chimp) respond in the allocation of resources
    and the enforcement of rights, I wonder? And, more to the
    point, do you think that, however it acted, its actions
    would be regarded as "friendly" by all people?

    Regards,
    Brett Paatsch


