Re: Who'd submit to the benevolent dictatorship of GAI anyway?

From: Brett Paatsch (bpaatsch@bigpond.net.au)
Date: Fri Sep 05 2003 - 23:39:42 MDT

    Anders writes:

    [Brett]
    > >...whether it's Joe the person,
    > > Joe the group or corporation, or Joe the country, the concerns
    > > are still the same.
    >
    > It still assumes that one group is the only one with access to
    > a certain technology.

    > .... the strong takeoff position claims that this is enough to
    > make the monopoly last indefinitely, but I'm not so sure about
    > that (or the strong takeoff).

    Me either. I'm extremely sceptical, but in my view it's worth
    considering the arguments for a strong take-off if one is to make
    good decisions about where it is prudent to spend one's time.

    > IMHO a much more plausible situation has AI of various
    > calibers in the hands of many groups, and Joe just happens
    > to have the most powerful one.

    Agreed. Though I'd note that a Joe-group's structure may have
    some implications too. One Joe person might design a
    Joe-group (as a set of private corporations), but that's an
    aside.

    > What remains to be analysed is how much power that
    > translates into.

    Yes. Excellent question.

    > > Where in all this does the friendly bit of AI
    > > get to emerge that is genuinely universally friendly? Seems
    > > to me it quite possibly doesn't. Universal friendliness is not
    > > something that the AI developers get a premium for
    > > "growing from a seed".
    >
    > It seems likely that many would aim for something like
    > friendliness, but indeed not universal friendliness.

    Again, I'd like to hear the case well argued for the possibility
    of universal friendliness being designed in, from someone who
    thinks it can be. I thought Eliezer might be such a person, but I
    am quite possibly misunderstanding his position, and that would
    be substantially my own fault for not having had the time to come
    up to speed. It could also be Eliezer's position that friendliness
    is not guaranteed to be engineerable, but that the risk of not
    checking it out (if it is) is very much not worth ignoring. The
    singularity could be a "black" (negative to humanity) singularity.
    (Apologies again, Eliezer, if I'm misunderstanding.)

    > > > Maybe one could make an analysis looking at different "Gini
    > > > coefficients" of AI intelligence distribution.
    > >
    > > Sorry. Must plead ignorance to "Gini coefficients", but some risk
    > > analysis sounds real healthy, 'cause if it turns out, by running
    > > some fairly easy game theory notions, that the first AIs are going
    > > to face a backlash because they are not human and can't be
    > > agents or treated as persons (where would society draw the
    > > line?), then we may do well to look at the consequences of this
    > > for any singularity take-off time.
    >
    > I don't think game theory says anything about a backlash, that is just
    > psychology.

    Fair enough. I think you and I probably have slightly different
    perceptions of the ambit of game theory, and that your usage is the
    more conventional. I'd include psychology and forms of political
    transaction analysis (some of my own original stuff, so far as I
    know, based on notions of binary and ternary logic) in the broader
    notion of "game theory", i.e. what are the choices that players can
    actually make given a scenario, and how does each player's choice
    determine the choices of the other players. But again I digress.

    > As for the Gini coefficient, it is a measure of inequality.
    > It is zero for a population where everyone is equal (all AIs have the
    > same power) and one for a population of total inequality (one super AI,
    > zero others):
    > http://www.wikipedia.org/wiki/Gini_coefficient
    > http://mathworld.wolfram.com/GiniCoefficient.html
    >

    Thanks! I console myself, not very effectively, with the thought
    that Einstein struggled with math too :-( Probably from a higher base.
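
    Having now looked at the links, here is a rough little Python sketch
    of the textbook Gini calculation as I understand it (the power values
    are made-up numbers of mine, purely to see what the coefficient does
    at your two extremes):

        import numpy as np

        def gini(powers):
            # Gini via mean absolute difference:
            # G = sum_ij |x_i - x_j| / (2 * n^2 * mean(x))
            x = np.asarray(powers, dtype=float)
            n = len(x)
            diffs = np.abs(x[:, None] - x[None, :]).sum()
            return diffs / (2 * n * n * x.mean())

        print(gini([1, 1, 1, 1]))   # all AIs equal           -> 0.0
        print(gini([1, 0, 0, 0]))   # one super AI, rest zero -> 0.75 (tends to 1 as n grows)

    So zero really does mean perfect equality, and total inequality only
    reaches exactly one in the limit of a large population.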

    > Imagine that having an AI of power P directly translates into social
    > power; the social power SP of group number i with an AI is
    > SP_i= P_i / sum_j P_j

    Can you please define your terms "power P" and "social power SP"
    more fully? Intelligence is a matter of some contention even amongst
    psychologists in relation to humans. Artificial intelligence, as an
    approximation of it from various different directions, must be more
    so. Can you be sure it's meaningful to have a single "power P"? I am
    wary that my math is likely to be marked inferior to yours, but
    garbage in, garbage out ;-)
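
    Just to check I'm reading the notation right, a trivial sketch with
    some purely hypothetical numbers (mine, not yours):

        # Social power as a normalised share: SP_i = P_i / sum_j P_j
        powers = [100.0, 30.0, 10.0, 5.0]   # hypothetical "power" values for four groups
        total = sum(powers)
        social_power = [p / total for p in powers]
        print(social_power)                 # shares summing to 1: ~[0.69, 0.21, 0.07, 0.03]

    Though of course the sketch assumes the single scalar P_i is
    meaningful in the first place.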

    > If a group with greater social power can win over any group with lesser
    > power, then the most powerful group (let's call it 1, and order them
    > by power) can run everything if P_1 > sum_{j=2}^N P_j.
    > If we assume the powers follow a power law distribution
    > P_j = c/j^k for some k>0 this can happen above a certain k=K,
    > where 2^(1-K)=K.
    > So there is a range of power distributions that is not amenable to
    > direct takeover.
    >
    > If we assume P grows exponentially over time (P'=lambda P)
    > for everyone there is no relative change in the analysis above,
    > which is very interesting. If we only count AI power of course
    > the first AI will be dominant even if it is Eliza; in reality social
    > power has a non-AI factor F which we have to add in:
    > SP_i = (P_i + F_i)/sum_j (P_j + F_j); it can be assumed to be
    > constant and we can scale it to 1 for simplicity. A bit of equation
    > fiddling does not seem to change things much.
    >
    > But if the rate of growth is more than proportional to P_i
    > (a Vingean singularity), or if the initial distribution of the non-AI
    > power is extreme, then the potential of monopolies increases.
    >
    > This is worth looking into more detail, but now I have to go to
    > lunch.

    I agree, but I'm going to have to backfill some neglected math
    concepts to follow you properly above.
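
    Once I do, I'd guess something like the following Python sketch (the
    N = 1000 cutoff, the growth rate and the example numbers are all mine,
    purely illustrative) is the sort of thing to play with: it checks your
    dominance condition P_1 > sum_{j=2}^N P_j for a power-law distribution,
    and confirms that uniform exponential growth leaves the social-power
    shares unchanged:

        import numpy as np

        def shares(P, F=0.0):
            # Social power SP_i = (P_i + F_i) / sum_j (P_j + F_j), as per your definition.
            tot = np.asarray(P, dtype=float) + F
            return tot / tot.sum()

        def top_dominates(k, N=1000):
            # True if P_1 > sum_{j=2..N} P_j for the power law P_j = c / j^k (c cancels out).
            j = np.arange(1, N + 1)
            P = 1.0 / j**k
            return P[0] > P[1:].sum()

        # Scan k to see roughly where the takeover threshold K sits for N = 1000 groups.
        for k in np.arange(1.0, 3.01, 0.25):
            print(f"k = {k:.2f}  top group can dominate: {top_dominates(k)}")

        # Uniform exponential growth P_i -> P_i * exp(lambda * t) leaves the shares unchanged.
        P = np.array([8.0, 4.0, 2.0, 1.0])
        print(shares(P))
        print(shares(P * np.exp(0.5 * 10)))   # same shares after growth

    If that's roughly right, then, as you say, it is the superlinear
    growth case and the extreme initial distributions that need the
    closer look.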

    Well at least you didn't do that little bit of analysis "before
    breakfast" ;-)

    Seriously, I see the question of how likely "friendly" and
    "unfriendly" AI are to emerge, if at all, as crucial to planning,
    and I imagine others like me would too.

    I feel I can plan quite effectively assuming only Moore's law
    to, say, 2012, and the development of expert systems, with no
    requirement that they be friendly or generally intelligent at all
    anytime soon. I.e. I want better protein-folding grunt and a few
    other neat IT tools, but I don't need super general AI. And
    frankly, at this stage super general AI presents to me as more
    of a threat than a benefit. It seems likely to end up in the
    wrong hands and turn distinctly unfriendly, even with the best
    of intent by those working on it.

    Regards,
    Brett
