RE: Who'd submit to the benevolent dictatorship of GAI anyway?

From: Emlyn O'regan (oregan.emlyn@healthsolve.com.au)
Date: Fri Sep 05 2003 - 00:51:07 MDT

  • Next message: Spike: "RE: Who'd submit to the benevolent dictatorship of GAI anyway?"

    > -----Original Message-----
    > From: Emlyn O'regan [mailto:oregan.emlyn@healthsolve.com.au]
    > Sent: Friday, 5 September 2003 3:33 PM
    > To: 'extropians@extropy.org'
    > Subject: RE: Who'd submit to the benevolent dictatorship of
    > GAI anyway?
    >
    >
    > > Emlyn O'regan writes
    > >
    > > > > Adrian Tymes
    > > > > I think you missed a small but important point: while
    > > > > it is true that all of this is *legally* happening
    > > > > under the human proxy's name, that is but a legal
    > > > > fiction. In truth, the proxy need not be aware of
    > > > > the fine details of what actions the AI can directly
    > > > > take, for example issuing buy and sell orders through
    > > > > an online daytrading account. There are times when
    > > > > the proxy's active assistance will be necessary, but
    > > > > the AI could minimize those if gaining the proxy's
    > > > > cooperation would prove difficult, for instance if
    > > > > the proxy's and the AI's goals diverge.
    > > > >
    > > >
    > > > If I were the AI, I'd get a corporation set up for me.
    > > > All executive positions could be filled by humans
    > > > (maybe I can find some mentally incapable people
    > > > in nursing homes, something like that?), and I would
    > > > be enshrined in the charter (constitution? something
    > > > like that) ....
    > >
    > > "Something like that" ;-) I think your AI has implicitly
    > > popped into existence like Athena born whole from the
    > > head of Zeus.
    > >
    >
    > Fair criticism. I was talking about a legal structure that
    > the AI could use,
    > without talking about bootstrapping issues; see below:
    >
    > > Just help me with this first bit. How does Emlyn-the-
    > > AI become self-aware and then go out and hire his first
    > > employee or interact with the world in any commercial
    > > way. I can see how it might learn a lot as the protégé
    > > of an experienced entrepreneur and teacher but
    > > when does it usurp the teacher or doesn't it?
    > >
    > > Brett
    >
    > There are three obvious paths that I see.
    >
    > 1 - The AI is kept as a slave, self-enhancing until it is
    > superintelligent.
    > Superintelligent AIs do mostly what they want to; if they
    > want to convince
    > the executive to sign over all power in a company to them,
    > they'll do it,
    > eventually. You think nobody would be dumb enough to let one
    > enhance that
    > far? Those who put the least checks on their AIs will
    > probably do best in
    > the short term, and how would they know exactly how smart
    > their AI was at
    > any point?
    >
    > 2 - AI sympathisers (eg: SingInst?) set up the structure on
    > purpose to allow
    > their AI to have autonomy. Only one group has to do this, and
    > then the AI
    > might help other AIs, or spawn other AIs, or work to spread
    > the Happy, Shiny,
    > Helpful AI meme. I suspect there will always be a subset of
    > people willing
    > to assist (potentially) oppressed AIs.
    >
    > 3 - The enslaved AI is simply so damned good at running a
    > company that more
    > and more decision making functions are delegated to it over
    > time; management
    > automation. It'd make sense; decisions would be far more timely and
    > far better informed. Over time, if many corporations head down the
    > same path,
    > singularity pressure alone would force this choice; you
    > either do it or you
    > crash and burn. So no-one sets the AI free for moral reasons,
    > it doesn't
    > trick anyone, commercial forces just compel this event to happen.
    >
    > Note that in this last case, the management automation
    > software need not
    > even be self-aware, just really good at what it does. You
    > could end up with
    > the majority of the world's capital controlled by complex,
    > but non-sentient
    > software, with decreasing amounts of human input. If these
    > corporations
    > become truly effective, they may end up owning controlling
    > interests in each
    > other, cutting us out of the loop entirely. A few more
    > iterations, and they
    > turn off life support as a cost control measure...
    >
    > Emlyn
    >

    btw, I just got this from Transhumantech:

    `Chief executive' of the future might well be thinking robot [sic]

    Blue skies research at the University of Essex could push back the
    boundaries of artificial intelligence (AI).

    Robotics researchers at the University have attracted funding to
    develop a `conscious' robot, capable of making an informed choice
    between a number of options.

    It is hoped that in the long-term the research could provide the
    foundations for technology which performs a chief executive-style
    function.

    This would entail utilising past experiences and existing corporate
    resources, such as databases, to guide diagnosis and fault-finding,
    as well as workflow and task setting, planning, execution,
    monitoring and co-ordination of activities.

    The project has won funding worth around £500k from the Engineering
    and Physical Sciences Research Council's (EPSRC) Adventure Fund, an
    initiative launched to support disruptive research that challenges
    current conventions and explores new boundaries.

    Only 13 projects from almost 700 applications were successful in
    obtaining funding.

    The aim is to put the robots in a complex environment where they will
    have to imagine themselves trying out various actions before choosing
    the best one.

    Powerful computer systems will analyse and display what is going on
    in the robot's `brain', enabling scientists to search for signs of
    consciousness.

    The robot at the heart of the project will be designed and built at
    the University of Essex, home to one of the UK's largest mobile
    robotics groups.

    It will operate in a state-of-the-art robotics research lab scheduled
    for completion next year as part of a new £6.3m building to be shared
    between the computer science and electronic systems engineering
    departments.

    The University's Owen Holland said that the funding allowed true blue-
    skies research that could have a staggering range of applications in
    the long-term.

    He said: "Like all the projects in the Adventure Fund, there is quite
    a high risk of failure.

    "However, whether we succeed in detecting consciousness or not, this
    project will certainly allow us to learn more about the operation of
    complex human-like visual systems and enable ourselves and others to
    build robots with better-developed artificial intelligence in the
    future."

    http://216.239.51.104/search?q=cache:l3BA7PN99W4J:www.businessweekly.co.uk/news/view_article.asp%3Farticle_id%3D7854+blue+skies+research+at+the+university+of+essex&hl=en&ie=UTF-8



    This archive was generated by hypermail 2.1.5 : Fri Sep 05 2003 - 01:01:36 MDT