Re: who cares if humanity is doomed?

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Mar 11 2003 - 11:49:38 MST


    Lee Corbin wrote:
    >
    >>If those individuals are AIs or post-humans or such,
    >>is that any worse than if those individuals are humans?
    >>I don't see why. But maybe I'm unusual in having more
    >>sentience-loyalty than species-loyalty.
    >
    > Don't you see the frightful logic that you are allowing
    > yourself to carelessly embrace? It's as though Montezuma
    > and his Indios friends had said, "It will be good if we
    > continue to live, but does it really matter if the Spanish
    > live instead, and we die?"

    I also have sentience-loyalty rather than species-loyalty. Look at my
    email address, for heaven's sake. Unfortunately, I turned out to be dead
    wrong in believing that any intelligence would necessarily be moral, or
    even necessarily sentient. Our intelligence and our morality *are*
    related, very deeply related; the people who think that morals and
    intelligence are separate magisteria are, as our intuitions insist,
    wrong.

    The problem is that knowing this does not tell you what the relation
    between intelligence and morality actually *is*. Okay, I finally know.
    I haven't written it up yet, but I have figured it out. The relation
    between seed AI theory and Friendly AI theory turns out to be something
    like the relation between special and general relativity. But while
    intelligence is an inextricable part of sentience, it may be a part of
    other things as well. Marcus Hutter's AIXI-tl formalism specifies a
    computer program that is superhumanly effectual, unalterably hostile,
    and nonsentient. It would take mastery of AI morality, on the same
    level as what it takes to create Friendly AI, to create a mind that was
    *not* completely worthless.

    The things that matter to us are a small, small target requiring exceeding
    precision to hit. No more arbitrary, and no less valuable, but a small,
    small target. There was a time when I thought otherwise. But if you
    recall, I never claimed during that time to know what the hell I was
    trying to hit. Well, now I know, and it's a small, small target.

    -- 
    Eliezer S. Yudkowsky                          http://singinst.org/
    Research Fellow, Singularity Institute for Artificial Intelligence
    

