Re: ExI principles: people left behind?

From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Fri Jul 18 2003 - 20:55:40 MDT

    On Fri, 18 Jul 2003, Eliezer S. Yudkowsky wrote (responding to
    my rather extreme utilitarian simplification of things):

    > Are you autistic?

    No, but I do seem to have a rather high Asperger's quotient, so
    it may be easier for me to make such proposals than for the average
    individual. (Careful how you phrase things -- you may be abusing
    a "disabled" person (me) -- lord knows what legal swampland that gets
    you into in the U.S. ...) semi ;-)

    > Historical villains have killed millions of people in
    > terrible causes, but the idea that it's too inconvenient to think about
    > the subject, and that dropping nukes would save time and aggravation, may
    > well represent a new low for the human species.

    Ah, but the debate must change if the "killing of millions of people"
    is in the name of a "good" cause. I do not see in your message
    a schema for the valuation of "lives" -- say, even an AI life vs.
    a human life. This is not a new debate -- it goes way back to
    the question of whether one has the right to kill (shut off, erase)
    one's copies (even *if* they have given you "well informed"
    permission to do so in advance).

    And the "valuation of lives" goes to the crux of the matter. Nick
    in his paper suggested there were some alternate perspectives
    of utilitarian evaluations, Anders expanded on this quite a bit in
    his comments (much to my education).

    But the problem is not simple and it doesn't go away (just because we
    find the discussion repulsive).

    I do agree that villains have abused their power and that millions of
    innocent people have died as a result. I would also probably agree
    that my suggestion would result in similar negentropic casualties.
    But the point I am trying to get at is *when* are negentropic losses
    acceptable? Is the saving of a single human life worth a sacrifice
    by humanity? In medicine this is known as "triage" -- and it involves
    some very difficult decisions as to how one optimizes whom one saves.

    > I doubt you could kill a single human being at close quarters.

    Eliezer, given a few situations that I have been through in my life, I have
    no doubt that I could kill a single human being at close quarters (or even
    multiple human beings) [these were primarily self-defense situations]. I
    will observe that this is a very different position from the one I held
    around your age (when I was fighting with my father to avoid returning my
    draft card to the government during the declining days of the Vietnam
    War). I would also note that in order to do this I would have to make a
    very short-term analysis as to whether my life is worth more or less than
    that of the individual who might be killed. (I would likely be willing to
    sacrifice my life if I thought the other individual had a more extropic
    vector.) It seems likely that this would be a very error-prone process.
    But one has to base survival decisions on what one is given.

    > because knifing a person sets off our built-in instincts and pressing
    > a button does not.

    I was trying to go beyond that. I was trying to determine whether
    or not there is a moral framework for the net worth of human lives,
    and whether that justifies a "way of being". For example, the
    Buddhist perspective on "lives" provides a "way of being" -- the
    extropic principles may not (at least in some aspects). And perhaps
    more importantly, the extropic perspective may *never* generate a
    schema that trumps the Buddhist perspective. That is why I raised
    the question of how one achieves the shortest path to one's goals.

    (Or being pragmatic -- we will not nuke anyone -- we will simply
    deny access to Life Extending Technologies to anyone who is clearly
    a "luddite" attempting to discourage the development of such LET.
    Someday those of us who support and use LET will triumph.)

    > Technological distance is emotional distance, as Dave Grossman put
    > it in "On Killing".

    I would disagree. If that were true I would not have contributed
    tens of thousands of dollars to The Hunger Project over two decades.
    I never met the people who may have been helped by my support.
    I simply supported them because it seemed like the right thing to do
    (i.e. it seemed extropic before I ever heard of ExI).

    > And how easy it is for people who can't distinguish word games from
    > reality to arrange a few thoughts in the right order and decide to
    > commit genocide. The human mind has no safety catch.

    I am not playing word games. My comment was very serious (though
    I may currently regret posting it). It was an effort to question
    "at what rate" and "how" do you want humanity to evolve?

    > Because you genuinely seem to be serious. I wish I could say I don't
    > understand it, but I do, and I'm sad, and frightened, because you were
    > someone I used to respect. Even if you don't understand what you're saying,
    > even if it has no connection to reality for you, you said it, and I can't
    > make it unreal to myself.

    If it is of any help, reframe it in terms of "can you erase your copies"?
    It seems to be a reasonable proposal that an evolving technological
    civilization that allows the erasing of copies would advance faster
    than one that does not (simply due to the expense of the memory
    requirements of preserving inactive copies -- ignoring the question
    of whether copies must be allowed some slice of the global CPU time).

    So, making the great "leap" that one human is pretty much like another
    human (I mean really -- if a 1 cm^3 nanocomputer can support 100,000+
    human minds, our "individuality" is probably overrated), one begins to
    get into the question of the "survival of humanity". This isn't a
    new topic -- it has been discussed by Robin in his "If Uploads Come
    First" paper (http://hanson.gmu.edu/uploads.html).

    All I am saying -- and I am sad that it makes you "sad, and frightened",
    but someone has to face what I perceive as the spectre of the Pied Piper --
    is that the philosophy, belief system, what we promote, etc. may be
    very incomplete unless we deal with the fact that a society that
    allows the deletion of copies may out-evolve a society that does not.

    And particularly for you Eliezer -- I have not noticed (though I
    will admit not having read what you have written extensively --
    it is a rather great volume) any focus on addressing the issue
    of what one preserves as an AI evolves.

    Ok, it is reasonable to delete the memories and functional
    algorithms of an AI with the intelligence of someone with
    Down's syndrome, but unreasonable to delete those of an AI
    with the intelligence of Einstein.

    Where do you stand on the "Deep Blue" preservation effort?
    Should we not anticipate that it will be only a few decades before
    this software (?intelligence?) is lost forever?

    After all, it did defeat the best human chess player in the world...

    Robert



    This archive was generated by hypermail 2.1.5 : Fri Jul 18 2003 - 21:05:07 MDT