RE: Who'd submit to the benevolent dictatorship of GAI anyway?

From: Emlyn O'regan (oregan.emlyn@healthsolve.com.au)
Date: Fri Sep 05 2003 - 02:43:55 MDT


    I think you've mistaken me for someone who thinks friendliness is doable.
    All I was presenting were some fairly easy paths to legally self-sufficient
    AIs (or at least to AIs who are citizens for all intents and purposes),
    without requiring human masters, by having the AIs control
    corporations...

    > -----Original Message-----
    > From: Brett Paatsch [mailto:bpaatsch@bigpond.net.au]
    > Sent: Friday, 5 September 2003 5:20 PM
    > To: extropians@extropy.org
    > Cc: sentience@pobox.com
    > Subject: Re: Who'd submit to the benevolent dictatorship of
    > GAI anyway?
    >
    >
    > Emlyn O'regan writes:
    >
    > > [brett]
    > > > Just help me with this first bit. How does Emlyn-
    > > > the-AI become self-aware and then go out and hire
    > > > his first employee, or interact with the world in
    > > > any commercial way? I can see how it might learn a
    > > > lot as the protégé of an experienced entrepreneur
    > > > and teacher, but when does it usurp the teacher,
    > > > or doesn't it?
    > >
    > > There are three obvious paths that I see.
    > >
    > > 1 - The AI is kept as a slave, self enhancing until it is
    > > superintelligent. Super intelligent AIs do mostly what
    > > they want to; if they want to convince the executive to
    > > sign over all power in a company to them, they'll do it,
    > > eventually. You think nobody would be dumb enough
    > > to let one enhance that far? Those who put the least
    > > checks on their AIs will probably do best in the short
    > > term, and how would they know exactly how smart
    > > their AI was at any point?
    >
    > slave/tool/pet. If it achieves recognition as a "slave",
    > its legal battle is probably largely over. I DO think some
    > folks would be "dumb" enough to let one enhance that far
    > in terms of general intellectual power: say, a corporate
    > exec who inherited a "seedling" that one's predecessor
    > had been playing with for R&D, and whose researchers had
    > developed the AI over time and taught it to do something
    > useful in a sort of expert-system MIS manner, for sure.
    >

    The returns would just be sooo good! ... until the world comes falling down
    around one's ears ...

    > I grant your point on the premiums for risk taking on the
    > checks and balances, but an AI that produces a commercial
    > or military or political return would seem to be selected for,
    > or specifically encouraged to learn in that direction, rather than
    > one that was just "friendly".

    Absolutely. Friendliness is extremely unlikely.

    > I can see the CEO saying "screw
    > the 'friendly' modifications or lessons or extra rule-handling
    > routines or whatever" (sorry for oversimplifying, Eliezer) and
    > "just get junior AI here hooked up to the news services and
    > the stock markets' information. When he shows promise
    > in that direction, duplicate him if you can and experiment
    > with various ways to make the duplicates outperform each
    > other commercially." The same guy that encourages junior AI
    > to develop (gives it electricity and resources, i.e. hardware)
    > when junior can't fend for itself is the guy that has a subjective
    > sense of how "friendly" junior AI needs to be. It needs to be
    > apparently friendly by his, the CEO's, lights, not by any other
    > more general criteria of friendliness.
    >
    > > 2 - AI sympathisers (eg: SingInst?) set up the structure
    > > on purpose to allow their AI to have autonomy. Only one
    > > group has to do this, and then the AI might help other AIs,
    > > or spawn other AIs, or work to spread the Happy, Shiny,
    > > Helpful AI meme. I suspect there will always be a subset
    > > of people willing to assist (potentially) oppressed AIs.
    >
    > Shades of the abolitionists in the US before the Civil War.
    > But with no disrespect to Eliezer or the Singularity Institute,
    > I'd still want to make up my own mind on the friendliness
    > or otherwise of any artificial intelligence purported to be
    > better at looking after everybody's own good than I'd be
    > at looking after my own good, without surrendering any
    > of my personal "sovereignty" to it.

    This is all individuals and small groups acting on their own
    cognisance; nobody else gets a vote at this stage.

    >
    > Perhaps I'm missing something fundamental in the notion
    > of "friendly". Perhaps there is some trick for making it
    > universal that someone has cottoned on to, so that it is not
    > just *their* notion of how a friendly AI should behave
    > and be directed by its goals, but is actually "objectively"
    > friendly? But that seems tricky. Help, Eliezer?

    Don't worry, just wait... it *will* convince you that it is friendly (if it
    wants to), no matter if it is or not.

    >
    > > 3 - The enslaved AI is simply so damned good at
    > > running a company that more and more decision making
    > > functions are delegated to it over time; management
    > > automation. It'd make sense; decisions would be far
    > > more timely and extremely good. Over time, if many
    > > corporations head down the same path, singularity
    > > pressure alone would force this choice; you either do it
    > > or you crash and burn. So no-one sets the AI free
    > > for moral reasons, it doesn't trick anyone, commercial
    > > forces just compel this event to happen.
    >
    > Yep. A la Moravec's Robot and, I think, Damien's Last Mortal
    > Generation.
    >
    > > Note that in this last case, the management automation
    > > software need not even be self aware, just really good at
    > > what it does.
    >
    > An expert system.

    AI might even emerge from cobbled-together, ever more skilled expert systems
    and other AI-ish bits and pieces. Drexler talks about this, doesn't he?

    >
    > > You could end up with the majority of the world's capital
    > > controlled by complex, but non-sentient software, with
    > > decreasing amounts of human input. If these corporations
    > > become truly effective, they may end up owning controlling
    > > interests in each other, cutting us out of the loop entirely.
    > > A few more iterations, and they turn off life support as a
    > > cost control measure...
    >
    > Ah, I didn't follow the bit where the expert system, or the better
    > expert systems amongst competing ones, grabbed most of
    > the world's assets without running into the regulators of other
    > countries etc.

    Well, the expert systems probably still have figurehead humans in executive
    positions all the way along. It just becomes impossible for those humans to
    intervene, and everyone's wealth is too dependent on the machines to be able
    to pull the plug. By the time it's a problem, the regulators are using the
    same systems, in any case (how else can they keep up?).

    >
    > But more to the point, aren't you making a case for a sort
    > of expert system that becomes a general AI with particular skills
    > and gets good at "appearing" friendly and bountiful to its
    > stakeholders, the shareholders in its owning corp? Or is it
    > actually really friendly to everyone somehow, in which
    > case it's perhaps not doing its best by the shareholders?
    > Goal conflict?

    I'm not making any case for friendliness. Shareholders won't give two brass
    wazoos for friendliness, even toward themselves, as long as they are making
    money.

    >
    > I'm back to my concept of speaking to this AI that's
    > telling me, "trust me, I know better than you, I've been
    > well brought up to have no alliances, and to be universally
    > altruistic, so I'm REAL friendly, and you should just do as I
    > say, as quickly as you can, and all will be A-OK."
    >
    > Hmm. Metaphorically speaking -Mr Serpent - I'm not
    > sure I'd like those "apples".
    >
    > Brett

    Oh sure, you'll like 'em. He's very, very convincing, after all.

    Emlyn



    This archive was generated by hypermail 2.1.5 : Fri Sep 05 2003 - 02:54:49 MDT