Re: Radical Suggestions

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Jul 25 2003 - 11:26:14 MDT

    John K Clark wrote:

    > "Lee Corbin" <lcorbin@tsoft.com>
    >
    > > Suppose that I suddenly found myself in the year 1936 about
    > > fifty miles outside Berlin, and I had in my hand a remote
    > > control switch that would detonate a Hiroshima-sized device
    > > in the capital of Germany, and that I knew that this would
    > > be my only chance to kill Hitler and his henchmen.
    > > I would scarcely hesitate, even though it would mean the
    > > immediate deaths of 100,000 people.
    >
    > In 1936 you would know that Hitler was a very bad person, but the trouble
    > is that you would not know that very soon he would cause the deaths of 30
    > million people; nor in 1936 would you know whether incinerating the German
    > capital would lead to something even worse than Hitler by demonstrating to
    > the world, nine years early, that nuclear weapons are possible and
    > practical. Even today I don't know.

    Yes, ethical questions like "Would you go back in time and kill Hitler as
    a five-year-old?" are very much along the lines of "Why don't you buy the
    winning lottery ticket, 1, 5, 31, 38, 47, 3, and feed starving children with
    it? It would just cost a dollar! What kind of heartless bastard are
    you?" The uncertainty is the *whole point*. The real question is, "Would
    you, as a general policy, kill all five-year-olds who had done as much net
    wrong as the five-year-old Hitler at that point?"

    However plausible, intuitive, and easy to imagine the story may be, the
    protagonist of such a time-travel adventure *is not human* with respect to
    ethical decisions - that whole mode of existence and choice is alien to our
    own world of strictly forward causality.

    -- 
    Eliezer S. Yudkowsky                          http://singinst.org/
    Research Fellow, Singularity Institute for Artificial Intelligence
    

