RE: The Future of Secrecy

From: Rafal Smigrodzki (rafal@smigrodzki.org)
Date: Fri Jun 20 2003 - 18:03:58 MDT


    Robin wrote:

    >
    > Even when creatures share many design elements, I expect much
    > larger differences in raw abilities than among humans today. I
    > expect minds to vary by many orders of magnitude in speed, memory,
    > etc., and bodies to vary greatly by the environment they are most
    > suited for. And minds will vary more in how much they know about
    > and have specialized software for particular topics.
    >
    > Thus we cannot remotely hope for peace via similarity. If your
    > favored route to keeping the peace today is via people noticing
    > how similar people are, and using political systems that ignore
    > differences, you'd better accept that this approach cannot last.
    > Of course we might hope for peace via rationality and self-interest.
    >
    ### The idea of a universal ethics (even if it means paring it down to
    the most limited set of rules), together with the concept of reciprocity
    among moral agents equally endowed with basic rights, has been the
    bedrock of most ethical philosophizing since the very beginning (with
    the exception of such aberrations as environmentalism and some flavors
    of the animal rights movement). It was tenable as long as all sentients
    were reasonably similar, but, as you point out, the appearance of
    radically different structures will put these ideas to the test. The
    concept of the Friendly AI skirts the problem by assuming that the FAI
    will never become an independent moral actor (as long as the supergoal
    of Friendliness guides its behavior), although it will be an independent
    moral philosopher. The problems posed by the coexistence of radically
    enhanced uploads and unchanged humans will be more complex still. The
    den Otters of this world see it as the harbinger of a brutal struggle,
    with the strong destroying the weak. After all, rationality and
    self-interest could prompt some of the top-level cooperators (those
    capable of verifying each other's reliability in cooperation) to decide
    that minds below a certain level will not be afforded any rights (not
    even self-ownership or autonomy).

    Today, many of us treat a person's behavior towards beings weaker than
    himself as a litmus test for future cooperation - somebody who kicks his
    dog might kick us if given more power. This rewards benevolent behavior
    towards entities that differ from us. Yet cooperators capable of
    directly verifying reciprocal honesty would not need to rely on such
    proxy measures - and maybe, as den Otter imagines, they would stomp on
    whatever sits lower on the ladder of complexity. Is this a real danger?
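
    The incentive shift can be made concrete with a toy payoff model (a
    purely illustrative Python sketch; the constants and names are invented
    for the example, not drawn from anyone's argument). An agent earns
    income from deals that partners accept either on the proxy signal
    (kindness to the weak) or on directly verified reliability, and
    kindness carries a fixed cost:

        # Toy model: does direct verification of reliability remove the
        # payoff for benevolence towards weaker minds? All numbers are
        # made up for illustration.
        COST_OF_KINDNESS = 0.2   # per-offer cost of benevolence to the weak
        DEAL_VALUE = 1.0         # value of one accepted cooperation deal

        def income(kind_to_weak, reliable, verified, offers=1000):
            """Expected income over many offered deals. Partners accept a
            deal if they believe the agent is reliable: with verification
            they read `reliable` directly; without it they fall back on
            kindness-to-the-weak as a proxy signal."""
            accepted = reliable if verified else kind_to_weak
            deals = offers if accepted else 0
            cost = offers * COST_OF_KINDNESS if kind_to_weak else 0
            return deals * DEAL_VALUE - cost

        for verified in (False, True):
            for kind in (True, False):
                print("verified=%s kind=%s income=%.0f"
                      % (verified, kind, income(kind, True, verified)))

    Without verification, a reliable but unkind agent earns nothing (0 vs.
    800), so benevolence towards the weak pays even for the purely selfish;
    with verification, the unkind agent does strictly better (1000 vs.
    800), which is exactly the worry.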

    Rafal


