Re: Who'd submit to the benevolent dictatorship of GAI anyway?

From: Brett Paatsch (bpaatsch@bigpond.net.au)
Date: Thu Sep 04 2003 - 23:09:17 MDT


    Robert J. Bradbury writes:
     
    > On Fri, 5 Sep 2003, Brett Paatsch wrote:
    >
    > > [snip] But if folks will go out of their way to vote just
    > > to keep people they don't like out of power and if they
    > > are already concerned about jobs, how is it that the
    > > AI with its human proxies (who'd also be massive
    > > beneficiaries of its wealth creation strategies and ability
    > > to draw better future maps etc) would not evoke a huge
    > > backlash?
    >
    > Brett, it depends to some extent on whether AI and
    > nanotech coevolve. AI+Nanotech allows people to live
    > essentially for "free" (perhaps even advanced biotech can
    > pull that rabbit out of the hat).

    Theoretically and potentially free, maybe, but that is not the
    only possible future some folks envision when they think of
    AI and nanotech. E.g. Terminator 3 and Michael Crichton's
    Prey.

    > AIs alone -- if they are "owned" (now we get to start an
    > AI "slavery" discussion) and skilled and in demand might
    > allow one to live for free as well.

    Hmm, depends on whom they are owned by, doesn't it? My picture
    of corporate mindsets is that they are not only concerned with
    current profits but with future profits, and like to think in terms
    of return on investment and how much of the potential market
    they may capture. A competitor corp with an AI may be a
    hair-raiser for a corporation that sees its potential
    market growth suddenly looking to be competed away.

    Now take the thinking up a notch. Politicians like to give their
    constituents jobs; countries that can own AI first and compete
    internationally can bring more of the wealth home. AI may
    potentially create the means to allow one (or even all) to live free,
    but is that the potential future that is *likely* to emerge, given
    that some folks, individuals, groups, corporations, or governments
    are likely to see their investment in these enabling technologies as
    their means of ensuring a standard of living for their own
    constituents? Big tech jumps seem to start off technological arms
    races.
     
    > Alternatively, nanotech alone, if there are a sufficient number
    > of nanotechnologists and/or computers doing the design work
    > might allow one to live for free. So there need not be a
    > "huge backlash".

    Agree there need not be. But I think it's a good idea to
    consider what the *likely* effect of the emergence of such tech
    in the hands of some but not all is going to be, politically and
    economically, not just what it might or could be ideally.

    Ideally, relatively safe nuclear power would have been welcomed
    as a great thing too.

    >
    > I also tend to disagree that one needs human proxies to
    > support most AI work. One does need the computer
    > resources and interfaces to reality but after that I think
    > humans are out of the loop.

    Ah, now we're into legal issues, like who can be your agent
    and to whom you can't grant power of attorney. Clearly
    no one yet has granted agency or power of attorney to
    a non-biological AI. The law isn't ready, imo.

    > I only need to interface my AI to my bank and/or broker
    > accounts and then after that I can ignore it except to check
    > up on things from time to time. Of course one has to trust
    > the AI and the interfaces but one would hope we are
    > getting better at developing such things.

    I think we are and will get better, but the issue is partly how
    and at what rate. The point above is that the people your
    AI is dealing with may not be willing to accept it as your
    agent or acknowledge it has power of attorney for you.

     
    > However there are significant risks that arise if an amoral
    > AI cracks a virtual private network -- such as the Kazaa
    > network and installs itself on millions of machines. The last
    > month has clearly demonstrated that security holes provide
    > the means for viruses and worms to capture millions of
    > machines in a brief period of time.

    Yup.

    >
    > How long before someone produces an evolving virus/worm,
    > perhaps akin to the various evolving SPAM messages, that
    > can defeat the anti-virus filters? Put an amoral AI on top of
    > that and you potentially have a *real* problem.

    This is a good technical question for the AI buffs. What sort of
    threat is this really? How highly should we factor this risk? It
    would seem to me that a modern military that takes war so far
    as to have psychological warfare as a specialty, and has internet
    specialty teams, is already moving in that risk space to some
    degree. Note it is not necessary to posit a malevolent government
    to posit a government concerned with the economic and military
    consequences of technology used against it.

    > One way to look at the script-kiddies of today is
    > to view them as really limited AIs. So look at the problems
    > they cause and then imagine what happens if they transfer their
    > limited intelligence into the millions of machines that are
    > vulnerable.

    Or take it up a couple of notches and don't think mischievous kiddies
    but think patriotic, dedicated military planners and government
    agencies concerned to forge the best possible standard of living
    for their citizens. It has not escaped my attention that the vernacular
    of many strategic marketers is almost the same as the vernacular of
    war.

    Regards,
    Brett



    This archive was generated by hypermail 2.1.5 : Thu Sep 04 2003 - 23:20:37 MDT