Re: Why would AI want to be friendly?

From: James Rogers (jamesr@best.com)
Date: Wed Sep 06 2000 - 06:45:38 MDT


On Wed, 06 Sep 2000, Eugene Leitl wrote:
> James Rogers writes:
>
> > I don't think this is an accurate characterization. The rules don't
> > change at all; there is still fierce competition among intelligent,
> > self-directed entities. All that happens is that the competition moves
> > out of the grossly physical domain for the most part.
>
> We're still pretty far from an equilibrium, still being in a
> spontaneous expansion process of civilisation into the wilderness. The
> grossly physical part may well come back with a vengeance (see the
> emergence of neoplagues and pests for a shade of things potentially to
> come), when we're nearing the more sustainable/equilibrium part of the
> development. Assuming the constraints of finite concentration of
> matter-energy in spacetime must always hold, things must eventually
> plateau. Before that, things might pass through a sequence of bottlenecks,
> e.g. if we can't expand freely into space, after having covered the
> planet with a thick crust of manmade artefacts and people.

I agree that we aren't there yet, but in a hundred years we'll most
likely either be there or be dead. Without thinking about it too much,
I would say the intelligence curve almost has to pick up soon, or we
humans will end up brute-forcing ourselves into an unpleasant
situation.

  
> > The capability to project outcomes of actions as a result of increased
> > intelligence and knowledge is most likely responsible for this shift.
> > The ability to accurately forecast the costs/benefits for a broad range of
>
> Of course, by expressing the planned behaviour, we're collectively
> changing the state of the system, and hence introduce an additional
> uncertainty. The others constitute a major part of the fitness
> function, which has to move when their strategies move. For instance,
> all the fat dotcom fishes in a tiny pond create a lot of ruckus, and
> muddy the waters. Mutually making planning more difficult. Time for
> some dynamite fishing ;)

On an individual basis the system appears chaotic, but to me it seems
much more regular on a macroscopic basis. Those dotcom fish may muddy
their pond, but the ocean looks as clear as it ever was. It is easier
to project outcomes for a system than for the individual players in it
(in a fractal sort of way). Also, the limits of resolution, at least
for these types of things, have a lot to do with the extent of
intelligence/information and computational power available. It is
probably possible to resolve the dotcom ruckus, but it would require
resources that may not currently exist. I would also agree that as the
resolution increases, so does the uncertainty, although not necessarily
at the same rate or in the same manner.
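
To put a toy number on that intuition, here is a throwaway Python
sketch (entirely my own illustration, with made-up parameters): a
thousand "fish" each take an individually unpredictable random walk,
yet their average barely moves, because fluctuations of the aggregate
shrink roughly as 1/sqrt(N).

import random

random.seed(0)
N, STEPS = 1000, 100           # number of agents, time steps
agents = [0.0] * N

for _ in range(STEPS):
    for i in range(N):
        agents[i] += random.gauss(0, 1)   # each step is pure noise

mean = sum(agents) / N
print("one agent after %d steps: %+.2f" % (STEPS, agents[0]))
print("population mean:          %+.2f" % mean)

A single agent typically drifts about sqrt(STEPS) = 10 units, while
the mean of N = 1000 agents drifts only about sqrt(STEPS/N) ~ 0.3: the
ocean stays clear even while every fish muddies its own patch.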

 
> > potential actions would encourage more subtle and less costly
> > manipulations than brute force to achieve the same effective results in a
> > competitive environment. It also allows one to recognize losing
>
> All that assuming smart agents. They are not the only ones on the
> stage. Dumb agents still pretty much brute-force.

True, and we are starting to see a mixture of the two emerge on a more
global scale. However, it takes a command economy to have even a chance
of brute-forcing a flexible, mobile economy with excellent intelligence
assets. And the window in which even this is possible is shrinking
(already closed?).

Note that this is directly applicable to warfare (we can use it as a
model) and can be seen in the evolution of armaments and battlefield
tactics. Heavy armored assets are only really valid in the context of
insufficient battlefield intelligence and, to a lesser extent,
mobility. Armor and strategies of attrition compensate for poor
situational intelligence: the former hedges against the possibility of
unexpected or unknown weaponry, while the latter bulldozes possible
targets of unknown strength and character with excessive force. The
extra expense is that resources are committed on the mere probability
that they may be needed, whether they actually are or not. In
organizations that have good intelligence assets, mobility becomes
much more valuable than hardness, since better situational awareness
lets you find well-described targets more quickly and avoid danger
more easily; hence the migration away from armored ground assets
towards air assets in modern militaries. The best defense is a good
offense, *if* you have good situational intelligence. A linear
increase in battlefield intelligence allows the use of existing
mobility in ways that exponentially increase the cost of aggressing
with relatively poor intelligence assets. Even where weaponry is
equivalent in capability, situational intelligence is an enormous
force multiplier. With sufficiently large deltas in situational
intelligence, qualitative and quantitative differences in the
armaments themselves become largely irrelevant (e.g. the Gulf War).
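
For what it's worth, the force-multiplier claim can be made concrete
with Lanchester's square law, the classic attrition model in which a
side's combat power scales with the *square* of its numbers. The
Python sketch below is purely my own illustration: the "intel" knob
and all the numbers are made-up assumptions standing in for a delta in
situational intelligence, not anything derived from actual Gulf War
data.

def battle(a, b, alpha, beta, intel, dt=0.01):
    # Euler-integrated Lanchester square-law attrition.
    # Returns surviving strengths once one side is wiped out.
    while a > 0 and b > 0:
        da = -beta * b * dt            # losses A takes from B's fire
        db = -alpha * intel * a * dt   # losses B takes from A's fire
        a, b = max(a + da, 0.0), max(b + db, 0.0)
    return a, b

# B outnumbers A two to one, with equal per-unit effectiveness:
print(battle(a=100.0, b=200.0, alpha=1.0, beta=1.0, intel=1.0))
# A 5x intelligence multiplier on A's side reverses the outcome,
# since A wins the square law when intel*alpha*A^2 > beta*B^2:
print(battle(a=100.0, b=200.0, alpha=1.0, beta=1.0, intel=5.0))

Because the winner is decided by the squares of the force sizes, the
brute-force side has to grow quadratically just to keep pace with a
linear intelligence advantage, which is the point above.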

As intelligence technologies (in both the information-gathering and
computational sense) improve, the amount of brute force required to
overcome increasingly intelligent and mobile adversaries will grow
geometrically. Right now, we are at the very beginning of this curve.
Very soon it will be economically unfeasible to consider brute-force
capability a significant competitive asset. In my opinion.

Of course, this also suggests the (rather obvious) consequences of
unleashing an AI on the world...

-James Rogers
 jamesr@best.com


