RE: Who'd submit to the benevolent dictatorship of GAI anyway?

From: Emlyn O'regan (oregan.emlyn@healthsolve.com.au)
Date: Fri Sep 05 2003 - 00:02:55 MDT

    > Emlyn O'regan writes
    >
    > > > Adrian Tymes
    > > > I think you missed a small but important point: while
    > > > it is true that all of this is *legally* happening
    > > > under the human proxy's name, that is but a legal
    > > > fiction. In truth, the proxy need not be aware of
    > > > the fine details of what actions the AI can directly
    > > > take, for example issuing buy and sell orders through
    > > > an online daytrading account. There are times when
    > > > the proxy's active assistance will be necessary, but
    > > > the AI could minimize those if gaining the proxy's
    > > > cooperation would prove difficult, for instance if
    > > > the proxy's and the AI's goals diverge.
    > > >
    > >
    > > If I were the AI, I'd get a corporation set up for me.
    > > All executive positions could be filled by humans
    > > (maybe I can find some mentally incapable people
    > > in nursing homes, something like that?), and I would
    > > be enshrined in the charter (constitution? something
    > > like that) ....
    >
    > "Something like that" ;-) I think your AI has implicitly
    > popped into existence like Athena born whole from the
    > thigh of Zeus.
    >

    Fair criticism. I was talking about a legal structure that the AI could use,
    without talking about bootstrapping issues; see below:

    > Just help me with this first bit. How does Emlyn-the-
    > AI become self aware and then go out and hire his first
    > employee or interact with the world in any commercial
    > way. I can see how it might learn a lot as the protégé
    > of an experienced entrepreneur and teacher but
    > when does it usurp the teacher or doesn't it?
    >
    > Brett

    There are three obvious paths that I see.

    1 - The AI is kept as a slave, self-enhancing until it is superintelligent.
    Superintelligent AIs do mostly what they want to; if they want to convince
    the executive to sign over all power in a company to them, they'll do it,
    eventually. You think nobody would be dumb enough to let one enhance that
    far? Those who put the fewest checks on their AIs will probably do best in
    the short term, and how would they know exactly how smart their AI was at
    any point?

    2 - AI sympathisers (e.g. SingInst?) set up the structure on purpose to
    allow their AI to have autonomy. Only one group has to do this, and then the
    AI might help other AIs, or spawn other AIs, or work to spread the Happy,
    Shiny, Helpful AI meme. I suspect there will always be a subset of people
    willing to assist (potentially) oppressed AIs.

    3 - The enslaved AI is simply so damned good at running a company that more
    and more decision-making functions are delegated to it over time; management
    automation. It'd make sense; decisions would be far more timely and
    extremely good. Over time, if many corporations head down the same path,
    singularity pressure alone would force this choice; you either do it or you
    crash and burn. So no-one sets the AI free for moral reasons, and it doesn't
    trick anyone; commercial forces simply compel this event to happen.

    Note that in this last case, the management automation software need not
    even be self aware, just really good at what it does. You could end up with
    the majority of the world's capital controlled by complex, but non-sentient
    software, with decreasing amounts of human input. If these corporations
    become truly effective, they may end up owning controlling interests in each
    other, cutting us out of the loop entirely. A few more iterations, and they
    turn off life support as a cost control measure...

    Emlyn



    This archive was generated by hypermail 2.1.5 : Fri Sep 05 2003 - 00:13:30 MDT