Re: Who'd submit to the benevolent dictatorship of GAI anyway?

From: Brett Paatsch (bpaatsch@bigpond.net.au)
Date: Thu Sep 04 2003 - 23:18:19 MDT

    Anders Sandberg writes:

    > > Brett Paatsch wrote:
    > >
    > > It seems like Joe, the human proxy who is allowed to make
    > > purchases etc. using the AI's judgement (perhaps because Joe
    > > reared the AI), is a substantial competitive threat. Society
    > > is likely to see Joe as equipped with a new form of super
    > > tool that could undermine a lot of power bases, rather than
    > > automatically move to grant the AI personhood (as in, say,
    > > Bicentennial Man) or even a bank account where the AI can
    > > sell services in its own right.
    > >
    > > Wouldn't governments concerned about jobs and the economy
    > > be tempted to move in to counter Joe's tool?
    >
    > Isn't the assumption that only Joe has such a tool rather
    > unlikely?

    Absolutely; I was simplifying. But whether it's Joe the person,
    Joe the group or corporation, or Joe the country, the concerns
    are still the same.

    > It only works if you assume that such AIs are the result of
    > an unlikely breakthrough. It is IMHO far too close to a Hollywood
    > meme for comfort. I would rather expect that in any setting there
    > would be a range of AIs. Joe's would just be the smartest, but
    > there would be AIs working for the government, companies and
    > other people too and they

    I won't repeat all my comments from my reply to Robert, but this
    is in fact largely my point. There will probably be a range of
    AIs, and there will be a bit of a technological arms race once
    the notion that AIs are producible and can give an economic
    advantage to their holders catches on. Where in all this does
    the genuinely, universally friendly AI get to emerge? It seems
    to me it quite possibly doesn't. Universal friendliness is not
    something that the AI developers get a premium for "growing from
    a seed".

    >
    > Maybe one could make an analysis looking at different "Gini
    > coefficients" of AI intelligence distribution.

    Sorry, I must plead ignorance to "Gini coefficients", but some
    risk analysis sounds very healthy. If working through some
    fairly simple game-theory scenarios shows that the first AIs are
    going to face a backlash because they are not human and can't be
    agents or treated as persons (where would society draw the
    line?), then we would do well to look at the consequences of
    this for any singularity take-off time.
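
    (Having now looked the term up: a Gini coefficient is a standard
    0-to-1 measure of how unequal a distribution is, where 0 means
    every holder has the same share and values near 1 mean one
    holder has nearly everything. Below is a rough Python sketch of
    the computation over a purely hypothetical list of AI capability
    scores; the numbers are made up for illustration only.)

        def gini(values):
            """Gini coefficient of a list of non-negative values.

            0.0 means a perfectly equal distribution; the result
            approaches 1.0 as one holder accounts for everything.
            """
            xs = sorted(values)
            n, total = len(xs), sum(xs)
            if n == 0 or total == 0:
                return 0.0
            # Standard mean-difference form over sorted values:
            # G = sum_i (2i - n - 1) * x_i / (n * sum(x)), i = 1..n
            return sum((2 * i - n - 1) * x
                       for i, x in enumerate(xs, 1)) / (n * total)

        # Hypothetical capability scores: one runaway leader vs. a
        # field of roughly equal peers.
        print(gini([100, 5, 5, 5, 5]))  # very unequal: ~0.63
        print(gini([10, 10, 10, 10]))   # perfectly equal: 0.0

    On Anders' suggestion, an arms race in which one holder pulls
    far ahead of everyone else would show up as a Gini near 1, which
    is exactly the concentration scenario the backlash worry above
    is about.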

    Regards,
    Brett


