Re: Who'd submit to the benevolent dictatorship of GAI anyway?

From: Anders Sandberg (asa@nada.kth.se)
Date: Fri Sep 05 2003 - 04:08:54 MDT

    (some analysis at the end)

    On Fri, Sep 05, 2003 at 03:18:19PM +1000, Brett Paatsch wrote:
    >
    > Anders Sandberg writes:
    >
    > > Isn't the assumption that only Joe has such a tool a rather unlikely
    > > assumption?
    >
    > Absolutely, I think I was simplifying. But whether it's Joe the
    > person, Joe the group or corporation, or Joe the country, the
    > concerns are still the same.

    It still assumes that one group is the only one with access to a certain
    technology. In general this is not true. Even for nuclear weapons the
    monopoly was short-lived; the strong takeoff position claims that this
    is enough to make the monopoly last indefinitely, but I'm not so sure
    about that (or the strong takeoff). IMHO a much more plausible situation
    has AI of various calibers in the hands of many groups, and Joe just
    happens to have the most powerful one. What remains to be analysed is
    how much power that translates into.

    > Where in all of this does the friendly bit of the AI, the bit that
    > is genuinely and universally friendly, get to emerge? It seems to
    > me it quite possibly doesn't. Universal friendliness is not
    > something that the AI developers get a premium for "growing from a
    > seed".

    It seems likely that many would aim for something like friendliness, but
    indeed not universal friendliness.

    > > Maybe one could make an analysis looking at different "Gini
    > > coefficients" of AI intelligence distribution.
    >
    > Sorry, I must plead ignorance of "Gini coefficients", but some risk
    > analysis sounds really healthy, because if it turns out from
    > running some fairly easy game-theoretic notions that the first AIs
    > are going to face a backlash because they are not human and can't
    > be agents or treated as persons (where would society draw the
    > line?), then we may do well to look at the consequences of this for
    > any singularity takeoff time.

    I don't think game theory says anything about a backlash; that is
    just psychology. As for the Gini coefficient, it is a measure of
    inequality.
    It is zero for a population where everyone is equal (all AIs have the
    same power) and one for a population of total inequality (one super AI,
    zero others):
    http://www.wikipedia.org/wiki/Gini_coefficient
    http://mathworld.wolfram.com/GiniCoefficient.html
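
    Here is a quick Python sketch of the definition, in case anyone
    wants to play with it (the power vectors are of course made up):

        # Gini coefficient via the standard closed form for sorted data:
        # G = (2*sum_i i*x_i)/(n*sum_i x_i) - (n+1)/n, i = 1..n ascending.
        def gini(powers):
            xs = sorted(powers)
            n, total = len(xs), sum(xs)
            if total == 0:
                return 0.0
            weighted = sum((i + 1) * x for i, x in enumerate(xs))
            return 2.0 * weighted / (n * total) - (n + 1.0) / n

        print(gini([1, 1, 1, 1]))  # 0.0: all AIs equally powerful
        print(gini([1, 0, 0, 0]))  # 0.75: one super AI and duds; this
                                   # tends to 1 as the population grows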

    Imagine that having an AI of power P directly translates into social
    power; the social power SP of group number i with an AI is

        SP_i = P_i / sum_j P_j

    If a group with greater social power can win over any group with
    lesser power, then the most powerful group (let's call it group 1,
    ordering the groups by power) can run everything if
    P_1 > sum_{j=2}^N P_j. If we assume the powers follow a power-law
    distribution P_j = c/j^k for some k>0, this can happen above a
    certain k=K where sum_{j=2}^N 1/j^K = 1 (for large N this means
    zeta(K)=2, i.e. K is about 1.73). So there is a range of power
    distributions (the flatter ones, k<K) that is not amenable to direct
    takeover.
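
    For the numerically inclined, here is a Python check of that
    threshold (the cutoff N and the bisection bracket are arbitrary
    choices of mine):

        def leader_dominates(k, N=100000):
            # Is P_1 greater than the sum of the rest when P_j = 1/j^k?
            return 1.0 > sum(j ** -k for j in range(2, N + 1))

        lo, hi = 1.1, 3.0           # bracket for the critical exponent K
        for _ in range(50):         # bisection
            mid = (lo + hi) / 2.0
            if leader_dominates(mid):
                hi = mid
            else:
                lo = mid
        print(hi)   # ~1.73: only steeper power laws permit takeover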

    If we assume P grows exponentially over time (P' = lambda*P) for
    everyone, there is no relative change in the analysis above, since
    the common factor e^(lambda*t) cancels out of SP_i; that is very
    interesting. If we only count AI power, of course the first AI will
    be dominant even if it is Eliza; in reality social power has a
    non-AI factor F which we have to add in:

        SP_i = (P_i + F_i) / sum_j (P_j + F_j)

    F can be assumed to be roughly constant, and we can scale it to 1
    for simplicity. A bit of equation fiddling does not seem to change
    things much.
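
    Here is a toy version of that fiddling in Python (power-law AI
    powers, the same growth rate lambda for everyone, all F_i scaled to
    1; the parameter values are just picked out of a hat):

        import math

        N, k, lam = 10, 1.0, 0.5
        P0 = [j ** -k for j in range(1, N + 1)]

        for t in [0, 5, 10, 20]:
            P = [p * math.exp(lam * t) for p in P0]
            total = sum(p + 1.0 for p in P)      # F_i scaled to 1
            print(t, round((P[0] + 1.0) / total, 3))
        # Group 1's share creeps from ~0.15 up toward its pure-AI share
        # P_1/sum_j P_j ~ 0.34 as the constant F becomes negligible; the
        # ordering and the takeover condition stay the same.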

    But if the rate of growth is more than proportional to P_i (a
    Vingean singularity), or if the initial distribution of non-AI power
    is extreme, then the potential for monopolies increases.
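
    And a crude Python caricature of the Vingean case, using
    P' = lambda*P^2 as one example of faster-than-proportional growth
    (again with made-up parameters):

        N, lam, dt, steps = 5, 0.1, 0.01, 900
        P = [1.0 / j for j in range(1, N + 1)]   # group 1 starts ahead
        print(round(P[0] / sum(P), 2))           # initial share ~0.44

        for _ in range(steps):
            P = [p + dt * lam * p * p for p in P]   # Euler step of P'=lam*P^2

        print(round(P[0] / sum(P), 2))   # ~0.83 and still climbing: the
        # leader reaches its finite-time blow-up first, so the gap
        # widens instead of staying fixed as in the exponential case.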

    This is worth looking into in more detail, but now I have to go to
    lunch.

    -- 
    -----------------------------------------------------------------------
    Anders Sandberg                                      Towards Ascension!
    asa@nada.kth.se                            http://www.nada.kth.se/~asa/
    GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y
    

