Re: Who'd submit to the benevolent dictatorship of GAI anyway?

From: Brett Paatsch (bpaatsch@bigpond.net.au)
Date: Thu Sep 04 2003 - 09:09:14 MDT


    Adrian Tymes writes:

    > --- Brett Paatsch <bpaatsch@bigpond.net.au> wrote:
    > > Personally, I don't see myself doing so *voluntarily*
    > > especially when any benevolence, real or alleged
    > > would be a matter still to be determined at least so
    > > far as I was concerned.
    > >
    > > Or am I missing the point here? How *would* a single
    > > super general AI actually benefit? Would it have
    > > *no* political power but say instantly suggest optimal
    > > game theoretical solutions to otherwise intractable
    > > problems or is it the super inventor that cares nothing for
    > > intellectual property rights?
    >
    > At first, of course, the AI would have to work through
    > human agents at some level, if for nothing else than
    > to hook it up to things that can affect the real
    > world.

    It seems so. In the emergent intelligence scenario, the AI
    accumulates resources (say, net resources) without paying for
    them in order to grow, a practice we'd normally regard as
    hostile, and as theft, were a human to do it.

    In the seed AI scenario, someone will initially have to make the
    buys and sells on the AI's behalf, as it won't be a legal person
    and presumably won't be able to trade in its own right. This
    means the AI is likely to be treated as owned by that proxy, and
    the proxy's friendliness, rather than the AI's, would seem to be
    the point.

    > The AI's "magic" is not in generating some
    > real world influence from nothing, but in leveraging
    > even a tiny bit of real world influence.
    >
    > Consider, for instance, an AI that could correctly
    > guess the next hour's trading on Wall Street with 99+%
    > accuracy, with an online daytrading account starting
    > at $1,000 as some researcher's experiment. With the
    > resulting string of "good days", this money would grow
    > exponentially, eventually allowing the AI to become
    > the majority stockholder in several corporations.
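
    Just to put a number on "grow exponentially": a rough
    back-of-envelope sketch, with figures I am assuming purely for
    illustration (Adrian didn't specify any). Say the 99%-accurate
    hourly calls net the account a mere 5% per trading day,
    compounding from the $1,000 seed:

        # Python sketch with assumed numbers: a $1,000 seed account
        # compounding at an assumed 5% net gain per trading day.
        balance = 1000.0
        daily_gain = 0.05              # assumed net edge per day
        for day in range(1, 253):      # roughly one trading year
            balance *= 1 + daily_gain
            if day in (21, 63, 126, 252):
                print(f"after {day} trading days: ${balance:,.0f}")

    Even at that modest assumed rate the account clears a couple of
    hundred million dollars inside a year, so the "majority
    stockholder" stage isn't far off.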

    The first AI would have to buy the stocks through a proxy, and
    that proxy has quite a lot of power. One person's ideological
    utopia is often other people's nightmare scenario. The wealth
    and power accrue not to the AI (a non-person under the law) but
    to its human proxy.

    > Granted, the stocks would all be in its host
    > researcher's name, but the AI would also presumably be
    > familiar enough with said researcher to gain
    > cooperation - voluntary or not, knowing or not - in
    > setting the AI itself up as the mouthpiece through
    > which orders are given.

    But how would the rest of the world react? The AI's human
    proxy, Joe, is getting super rich, wiping out his competitors
    and outcompeting everyone at a rate of change they can't match.
    To the losers this looks like an arms race, or a marketing war,
    against Joe, who is using the AI as a tool.

    Politically it is very hard to see people standing for it
    rather than attacking Joe, or trying to counter Joe and his
    perceived "tool", the AI.

    > A bit more capital would
    > allow it to become the sole stockholder in at least
    > some of these cases, streamlining the process.

    Not it but Joe, its proxy. It can't own shares; it's not a person.

    > Purchase orders could then be used to acquire real
    > goods (likely among them: distributed hosts for the
    > AI).
    >
    > Or consider the super inventor you described above,
    > who works with its creators to develop product ideas
    > and optimal markets, forming the basis of a "miracle
    > works" corporation that acquires wealth in the normal
    > fashion but greatly accelerated. And then the AI,
    > having taken over financial management as well ("I am
    > a calculator, and I have downloaded all the applicable
    > laws; what savings or benefits would we accrue by
    > hiring an accountant?"), has a few ideas for how to
    > spend a portion of the wealth it has helped generate.
    >

    <snip>

    > There is,
    > of course, the problem of rebellion against the law,
    > but we're presupposing an AI more than capable of
    > learning from history - including the classic problem
    > of dictators mismanaging things and trying to patch
    > over things with fear or propaganda rather than fixing
    > the true problem - and of fixing things so there's
    > very little if anything that people have any desire to
    > rebel against (and of maintaining enough humility to
    > make sure this really is the case, rather than just
    > deluding itself or having its agents delude it).

    Yeah, but what about the people who see the AI using Joe as its
    buyer (and Joe is really getting rich and powerful, by the way,
    and Joe is not designed to be friendly on everyone's terms)?
    My point is that Joe and his AI are likely to face a strong
    political backlash.

    Aren't they?

    Brett


