Re: Who'd submit to the benevolent dictatorship of GAI anyway?

From: Brett Paatsch (bpaatsch@bigpond.net.au)
Date: Thu Sep 04 2003 - 09:16:11 MDT


    Jef Allbright <jef@jefallbright.net> writes:

    [Sorry if I am repeating points here in my answer to Adrian]

    > Brett Paatsch said:
    >
    > > Could be the only way a hyper-intelligent AI can kick-start
    > > a rapid take-off singularity is against the wishes of a majority
    > > of voters, i.e. by brute military and/or economic force and
    > > through human proxies. That was my thought anyway.
    > >
    > > Do others see a problem with this reasoning or conclusion?
    >
    > It wouldn't be a matter of submitting to a dictatorship. It
    > would be a matter of choosing to go along with overwhelmingly
    > successful forces affecting your world, or suffering the natural
    > consequences of resisting or being left behind.

    It seems that Joe, the human proxy who is allowed to make purchases
    etc. using the AI's judgement (perhaps because Joe reared the AI),
    is a substantial competitive threat. Society is likely to see Joe
    as equipped with a new form of super-tool that could undermine a
    lot of power bases, rather than automatically move to grant the AI
    personhood (as in, say, Bicentennial Man) or even a bank account
    through which the AI could sell its services in its own right.

    Wouldn't governments concerned about jobs and the economy
    be tempted to move in and counter Joe's tool, or perhaps to
    compulsorily acquire it? The AI has no human rights, and it would
    be scaring a lot of the folks it was outperforming, as would Joe,
    its intermediary, who can trade and own IP. The temptation to
    take out the AI through industrial espionage may be very strong.

    Regards,
    Brett


