Re: Pointless Re: Eugene's nuclear threat

From: Brian Atkins (brian@posthuman.com)
Date: Mon Oct 02 2000 - 14:08:18 MDT


You must be suffering some serious cognitive dissonance about now, since
your chosen area of study is exactly the thing that may open Pandora's box,
so to speak. And it isn't just us - you've got your favorite de Garis, who
is just waiting for faster processing, and the following bit from the SL4
list last night will probably give you severe indigestion as well. Actually,
SIAI has got to be the least of your worries, since we have no plans at this
point to make use of evolutionary programming techniques...

-------- Original Message --------
Subject: RE: Ben what are your views and concerns
Date: Sun, 1 Oct 2000 23:25:06 -0400
From: "Ben Goertzel" <ben@intelligenesis.net>
Reply-To: sl4@sysopmind.com
To: <sl4@sysopmind.com>

hi,

> Ben do you have any concerns about runaway AI, or even if you
> believe human-
> equiv AIs will be controllable do you worry about some of the
> evil or simply
> shortsighted things they may be used for (especially if they get
> open sourced)?

I don't have ~big~ concerns about AI that becomes evil
due to its own purely AI motivations.

I do worry, though, about evil humans using AI in evil ways.

See, Internet AI systems aren't going to have an inbuilt urge for
aggression -- they're not growing up in a predator-prey ecosystem...
they'll mainly grow up to help and serve people in various ways; their
"fitness" won't have to do with killing other creatures or becoming
"leader of the pack," but rather with being as useful as possible to
their human masters...

What kinds of weird AI psychology will emerge once AI systems move on
beyond being primarily servants to humans... I don't know, but I have
speculated.

Here is a moderately brief excerpt from my forthcoming book "Creating
Internet Intelligence" (Plenum Press), which verges on this topic:

-- ben

****
With something as new and different as this [Internet AI], it would be easy
to slip up and create a disaster. Or would it? Maybe there are inexorable
forces of evolution at work here, and the conscious acts that we take are
just tiny little nudges one way or the other. Maybe if a disaster is
inevitable, there's no act that any of us could take to stop it anyway?
Anything's possible, of course, and in the presence of so many unknowns,
assigning probabilities to various outcomes is going to be more intuitive
than rational. My intuition is that what's going to happen will be good -
intelligence, creativity and passion will be served; instinctive, habitual
routines will be loosened; the process of forming and destroying boundaries
between people, groups and ideas will transform into something we can't yet
understand. But this is just the intuition of my little human brain,
supercharged by whatever "collective unconscious" forces it's managed to tap
into. How can we assess how much this is worth?

Much of my intuition about the long-term future of the Net comes from my
work on Webmind. Webminds, as individual minds, are going to be useful for
various practical applications as described in the previous chapter; but
they'll also be autonomous, self-directed systems, concerned with achieving
their own goals and their own happiness. What happens when the Internet is
dominated by a community of AI agents, serving commercial, collective and
individual goals? What will be the nature of this Webmind society, and its
implications for us?

Of course, Webmind society is not going to be the Internet as a whole. But
there's reason to believe that the core of the first phase of the
intelligent Internet will indeed be centered on a community of powerful AI
agents. And if this is true, that's all the more reason to understand how
these agent societies are going to operate. For example, what's the chance
that Webminds and other AI agents are going to sit around all day exchanging
encrypted messages about how to accelerate the obsolescence of the human
race?

I don't know the answers to these questions, but I've thought about them a
good bit, and discussed them with others. In this chapter I'll give some
ideas about Webmind societies, and their implications for the future of the
Net in general. In concrete terms, these ideas concern the Intelligent
Internet phase, more so than the Global Brain phase - they have to do with
computer programs on the Net, not with human brains jacked into the Net,
bio-digital intelligence, or the behavior of AI systems filled with the
contents of uploaded human minds. But, thinking more laterally, it seems
likely that the nature of the society of AI agents in the Intelligent
Internet phase is going to be critical in setting the stage for the nature
of the true Global Brain to follow. If this is the case, then Webmind
societies are a highly important issue.

The Webmind Inc. "tech list" e-mail discussion group has sustained a number
of long threads on the topic of Webmind morality, and social interaction
among groups of Webminds. These discussions sometimes seem frivolous, mixed
in as they are with reports of bugs in Webmind's basic thinking processes,
basic questions about Java programming and Webmind structure from new
employees, and debates over new features in Webmind's reasoning, language,
or data analysis modules... but, although they sometimes are frivolous,
they are also important. We all proceed fairly blindly into the future, but if we
squint our eyes hard enough, we can see a little bit, and after a lot of
thinking about where we want to go, we can have at least a little input into
our direction of movement. In many areas these internal company discussions
have gone far deeper than the discussions on the Global Brain mailing list,
as excerpted above.

One consequence of our discussions on Webmind morality has been the
realization that Teilhard really was wrong - the global brain will not be
perfect! In fact, the same flaws that plague human society will plague the
Intelligent Internet, though hopefully to a lesser degree, and definitely
with a different flavor. Furthermore, as a consequence of this, the
convergence of the Net with the Jungian vision of the collective unconscious
will be greater than it might seem at first glance. Many of the archetypes
of the human unconscious emerge from socialization, from the dynamics of
society. And there are certain aspects of social dynamics that seem to be
universal, that are bound to emerge in the global brain once it reaches a
certain level of complexity, just as they have emerged among humans.

We've seen how it's possible to embody a Webmind with compassion - you
program it so that its happiness will increase when it senses that the other
actors it interacts with are happy. One then has a collection of Webminds
that want to please each other. This enhances the intelligence of the
overall community of Webminds, because the Webminds have an intrinsic
motivation to supply each other with the best answers to their questions,
and to provide each other with resources when needed. If this were
Webminds' only motivation, one would soon have a community of morons, babbling
digital nonsense to each other in a chorus of mutually supportive, ignorant
bliss. But overlaid on a system in which Webminds achieve happiness by
creating patterns and satisfying users, and pay each other for intelligent
answers to their questions, compassion enhances emergent intelligence. This
hasn't been proven in practice yet, since we have not yet built a large
network of Webminds. But we've set up simulations that have borne out this
intuition.

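To make this mechanism concrete, here is a minimal toy sketch in Java
(Webmind's implementation language); the class name, the compassion weight,
and the update rule are illustrative assumptions of mine, not actual Webmind
code. Each agent's happiness blends its own task reward with the average
happiness it perceives in its peers:

// CompassionSim.java - a hypothetical toy model, not Webmind code.
import java.util.Random;

public class CompassionSim {
    public static void main(String[] args) {
        Random rng = new Random(42);
        int n = 10;               // number of Webmind-like agents
        double compassion = 0.5;  // 0 = purely selfish, 1 = happiness tracks peers only
        double[] happiness = new double[n];

        for (int step = 0; step < 100; step++) {
            double peerMean = 0;
            for (double h : happiness) peerMean += h / n;
            for (int i = 0; i < n; i++) {
                double reward = rng.nextDouble();  // payoff from serving users this step
                // Compassionate update: own reward blended with perceived peer happiness.
                happiness[i] = (1 - compassion) * reward + compassion * peerMean;
            }
        }
        double mean = 0;
        for (double h : happiness) mean += h / n;
        System.out.printf("mean happiness after 100 steps: %.3f%n", mean);
    }
}
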
So far, so good. But what happens when someone introduces a
non-compassionate Webmind (or another non-compassionate intelligent actor)
into the mix? A whole system of selfish Webminds works worse than a whole
system of compassionate Webminds. But is global compassion a stable
situation? One selfish Webmind, in a compassionate community, will have an
intrinsic advantage - it will in effect be able to make itself king. More
and more selfish Webminds will then get introduced into the system, as
others see the value of selfishness for achieving their goals. The
compassionate society will dissolve.
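
The instability argument can be sketched in code as well. In this
hypothetical toy model (the names and payoff values are assumptions of mine,
not output from our simulations), cooperators pay a cost c to give a benefit
b to a random partner, a selfish agent collects benefits without paying, and
every agent imitates whoever earns more; one selfish entrant is typically
enough to tip the population:

// InvasionSim.java - hypothetical sketch of the invasion argument above.
import java.util.Random;

public class InvasionSim {
    public static void main(String[] args) {
        Random rng = new Random(7);
        int n = 100;
        double b = 3.0, c = 1.0;           // benefit received, cost of giving
        boolean[] selfish = new boolean[n];
        selfish[0] = true;                  // a single selfish Webmind enters

        for (int gen = 0; gen < 50; gen++) {
            double[] payoff = new double[n];
            for (int i = 0; i < n; i++) {
                int j = rng.nextInt(n - 1);
                if (j >= i) j++;            // random partner other than i
                if (!selfish[i]) {          // compassionate: pay c, partner gains b
                    payoff[i] -= c;
                    payoff[j] += b;
                }
            }
            // Imitation: copy a random agent's strategy if it is paying better.
            boolean[] next = selfish.clone();
            for (int i = 0; i < n; i++) {
                int j = rng.nextInt(n);
                if (payoff[j] > payoff[i]) next[i] = selfish[j];
            }
            selfish = next;
        }
        int count = 0;
        for (boolean s : selfish) if (s) count++;
        System.out.println("selfish agents after 50 generations: " + count + "/" + n);
    }
}
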
What's the solution? One answer is benevolent fascism. Erect a global
authority, which makes sure that only compassionate Webminds get released
into the Net. But this will never work. The Net is too disorganized and
self-organized; no one owns it.

The only other answer that I see is, painfully enough, social ostracism.
Compassionate Webminds need to take a "tough love" approach to selfish
Webminds, and refuse to deal with them, even if it would be to their
short-term economic advantage to do so. It then becomes a bad strategy for
a single Webmind to be selfish. This seems simple enough. But the problem
is, how do you recognize selfishness, from the outside? It's not so easy.
This is just another tough pattern recognition problem. Seeing examples of
selfishness, and knowing some properties of selfishness, Webmind can learn
to recognize selfishness by certain signs. But then, Webminds will get the
hang of the "selfishness recognition systems" of other Webminds, and learn
how to fool each other, just as humans trick each other with false facial
expressions and tones of voice. And furthermore, there will be Webminds
that are perfectly compassionate, but that unintentionally give the signs of
being selfish - "false positives" for the selfishness recognition systems of
their fellow Webminds.

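Here is a hypothetical toy sketch of ostracism with a noisy selfishness
detector (the 10% error rate and all names are assumptions of mine). It
shows both failure modes at once: some selfish agents slip through, and some
perfectly compassionate agents get shunned:

// OstracismSim.java - hypothetical sketch of the "tough love" strategy above.
import java.util.Random;

public class OstracismSim {
    public static void main(String[] args) {
        Random rng = new Random(3);
        int n = 100, trials = 10000;
        double errorRate = 0.1;            // chance the detector misreads an agent
        boolean[] selfish = new boolean[n];
        for (int i = 0; i < 10; i++) selfish[i] = true;  // 10% of agents are selfish

        int selfishServed = 0, honestShunned = 0;
        for (int t = 0; t < trials; t++) {
            int target = rng.nextInt(n);
            // Detector: usually reports the truth, sometimes flips it.
            boolean looksSelfish = selfish[target] ^ (rng.nextDouble() < errorRate);
            if (looksSelfish) {
                if (!selfish[target]) honestShunned++;  // false positive: ostracized anyway
            } else {
                if (selfish[target]) selfishServed++;   // miss: selfishness rewarded
            }
        }
        System.out.println("honest agents shunned: " + honestShunned);
        System.out.println("selfish agents served: " + selfishServed);
    }
}
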
You have to act right to be accepted. If you don't act right, nobody wants
to talk to you. Some of the ways of acting "wrong" may actually be better
than the accepted ways of doing things, but no one seems to recognize this. You
either have to go along with the majority, accept your isolation, or band
together with similar freaks who go against the prevailing standard of what
is the correct way to be. This may sound familiar to many readers - it is
definitely familiar to me, from my teenage years, particularly the five
miserable years I spent in middle school and high school, before leaving for
college. Unfortunately, it seems that a certain amount of this stuff is
going to be there in Webmind communities as well. Not all of the nastiness
of human society can be avoided; some of it is an inevitable consequence of
the information-processing restrictions imposed by the finitude of mind. We
can't tell what's really good or not, so we have to estimate, and our
estimation errors may be painful for their victims.

And what happens when a band of freaks, going against the prevailing
standards of right, gets large enough? It becomes an alternative community.
You then have two groups, each one of which judges goodness according to its
own criteria, its own estimates. Each one may judge the other one as bad.
And - maybe - try to wipe the other one out, in the name of goodness?
Will things go this far in Webmind society? Will warfare erupt among
Webminds, based on differing groups that use different pattern recognition
algorithms to estimate goodness? Actually I doubt it. The saving grace of
digital intelligence, I believe, will be its adaptability. Webminds can
change much more rapidly than humans. Potentially, they can even revise
their brains. Right now this is well beyond any existing software, but in a
decade or so, we may have Webminds that can rewrite their own Java code to
improve functionality.

I don't think there is much relation between the goodness of a society and
the intelligence of the actors who make it up. Yes, more intelligent actors
can better figure out which features indicate goodness. On the other hand,
they can also figure out how to fool each other better. The two factors
probably balance out. I do think, however, that adaptability
encourages goodness. A fair amount of the stupidity of human society can be
traced to our slow adaptation, in particular to the inability of our brains
to respond to cultural changes.

We humans are to a great extent locked in by our evolutionary history.
There are hundreds of examples of this - one is the way that women's sexual
infidelity is treated much more seriously than men's, in all human cultures.
Many women find this unfair, and I would too in their place, but the reason
is obvious, if one takes a sociobiological, DNA-centric view. If a woman
has a baby by a different man than her husband, then the husband, insofar as
he is supporting and protecting the child, is wasting his time propagating
someone else's DNA. His DNA is angry: it wants to propagate itself. On the
other hand, if a man impregnates a woman other than his wife, this doesn't
matter much to the wife's DNA. All her DNA wants is for the husband to
keep supporting her children, which carry it into the future. So the extra
stigma attached to female infidelity makes sense from an evolutionary
perspective. But from a modern human perspective, it is almost completely
obsolete. Now, women can use birth control, hence they can sleep around
without much risk of pregnancy. Also, most women are no longer producing
children on a continual basis, so that most acts of infidelity do not
produce any question of paternal identity. Finally, we have DNA testing, so
that, in principle, every new father can test his child's DNA to see if he's
the real father or not, thus eliminating the risk of his DNA wasting much of
its effort propagating a competing DNA pattern. Have these developments
decreased the stigma attached to female infidelity? Yes, a bit. Cheating
women are no longer routinely killed. We are not completely pawns of our
evolutionary heritage. But these developments have not decreased it as much as they
should have, and they probably never will. Our mechanisms for judging
others are not very adaptive.

To take another example, Freud, in "Civilization and Its Discontents"
(1984), argued that neurosis is a necessary consequence of civilization.
His reason was that civilization requires us to check our primitive impulses
toward violence, to restrict our behavior in biologically unnatural ways.
In the terms I am using here, what was good in the context of tribal society
is no longer good in modern society, and this causes problems. Webminds
will not have much of this kind of problem: faced with the situation Freud
describes, they would just rewire themselves to be less violent.

Webmind society will thus be very different from ours. Social codes and
standards will change continually and rapidly. It is hard to imagine what
it would be like to live in such a way - but it's not impossible. Because,
after all, social codes and standards are changing more rapidly every
decade. Society has moved into fast-forward mode. Aboriginals dressed and
acted the same way for sixty thousand years; now styles change every six
months. The dynamism of internet intelligence and the dynamism of
contemporary culture will intersect to give the global societal mind a
colorful, vibrant, wild character that I could express in music or
pictures much more easily than in words.

Many features derived from human sexuality will be missing from Webmind
society, since the types of reproduction available to Webminds will be much
more diverse: a Webmind can clone itself, or can "cross over" with any
number of other Webminds, yielding Webminds with 2, 3, or 100,000 parents.
Furthermore a Webmind can be progressively altered by itself or its owner,
yielding a continuous evolution of personality that is not accessible to
humans at all, due to our inability to modify our own brain structure except
crudely through drugs. But even with this new diversity, much of the
archetypal structure of human relationships will be there. We know, from
our research with genetic algorithms, that sexual reproduction is much more
efficient than asexual reproduction by fission or continuous development.
So Webminds will reproduce sexually even though they have other options open
to them. And genetic algorithm experiments show that multi-parent
reproduction is not significantly more effective than two-parent
reproduction. So many Webminds will have two parents, though there will be
no difference between mom and dad.

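To illustrate the kind of comparison involved, here is a standard
textbook-style genetic algorithm sketch (mine, not the Webmind experiments
themselves): bit strings evolve toward all-ones, with two-parent uniform
crossover compared against mutation-only cloning:

// CrossoverSim.java - hypothetical sketch comparing sexual and asexual reproduction.
import java.util.Random;

public class CrossoverSim {
    static final Random RNG = new Random(1);
    static final int LEN = 64, POP = 50, GENS = 200;

    // Fitness: number of 1-bits in the genome ("OneMax").
    static int fitness(boolean[] g) {
        int f = 0;
        for (boolean bit : g) if (bit) f++;
        return f;
    }

    // Tournament selection: the fitter of two random individuals.
    static boolean[] tournamentPick(boolean[][] pop) {
        boolean[] a = pop[RNG.nextInt(POP)], b = pop[RNG.nextInt(POP)];
        return fitness(a) >= fitness(b) ? a : b;
    }

    static int run(boolean sexual) {
        boolean[][] pop = new boolean[POP][LEN];
        for (boolean[] g : pop)
            for (int i = 0; i < LEN; i++) g[i] = RNG.nextBoolean();
        for (int gen = 0; gen < GENS; gen++) {
            boolean[][] next = new boolean[POP][LEN];
            for (int k = 0; k < POP; k++) {
                boolean[] p1 = tournamentPick(pop), p2 = tournamentPick(pop);
                for (int i = 0; i < LEN; i++) {
                    // Sexual: uniform crossover of two parents; asexual: copy one parent.
                    next[k][i] = sexual && RNG.nextBoolean() ? p2[i] : p1[i];
                    if (RNG.nextDouble() < 1.0 / LEN) next[k][i] = !next[k][i]; // mutation
                }
            }
            pop = next;
        }
        int best = 0;
        for (boolean[] g : pop) best = Math.max(best, fitness(g));
        return best;
    }

    public static void main(String[] args) {
        System.out.println("best fitness, sexual (crossover): " + run(true) + "/" + LEN);
        System.out.println("best fitness, asexual (mutation): " + run(false) + "/" + LEN);
    }
}
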
Webminds will be careful about whom they reproduce with. If a Webmind has
access to certain resources, in which it wants to place one of its children,
it will want to make this child as good a child as possible. Furthermore,
once it has observed that it can produce a good child with another Webmind,
it may want to maintain this relationship over time.

"Childhood" among Webminds will not necessarily mean the same thing as it
does among humans. It is possible for two Webminds to mate and birth a
fully-formed Webmind, ready for action. On the other hand, it may be very
useful for a Webmind to create a "baby Webmind", with a partially empty
brain. In this way it may arrive at something much smarter than itself, or
at least something with new and different ideas. A baby Webmind, however,
will require a teacher. The notion of parental responsibility arises.
Webminds that take good care of their babies will be more likely to produce
successful babies. Thus, by evolutionary pressure, Webminds will come to
have an "instinct" that taking care of baby Webminds is good. The urge to
take care of baby Webminds will be automatically passed along from parent to
child.

And so it goes. The community of Webminds will not be exactly like human
society - far from it. But it will not be entirely different either. The
Jungian archetypes of union, child, family, will all be there, overlaid with
other archetypes that we can barely even envision, all improvising on the
theme of the basic numerical archetypes, the combinations of 0's and 1's
that make up the mind and the world. The human collective unconscious will
be made concrete via the action of intelligent actors on human text and on
numerical data representing human activities. But it will be made non-human
via the intrinsic peculiarities of these intelligent actors and their
interactions. Their own unconscious patterns will filter down into human
society, so that we are affected in subtle ways by the feeling a digital
actor gets when it has 1000 parents, as opposed to 1 or 2. Much of each
human being's brain will be filled with patterns and ideas of digital
origin, just as much of the intelligent Internet will be filled with
patterns and ideas of human origin. All this is bound to occur as a
consequence of our incessant daily interaction with the Net, and the Net's
increasing self-organizing intelligence.

...

*****

Eugene Leitl wrote:
>
> Brian Atkins writes:
> > Eugene this is going nowhere for one reason: look around at the real world
> > (as you love to point out)- there are no Turing Police out there, and I
> > don't see any developing. I don't know why we are wasting our time debating
>
> Perhaps there should be one.
>
> > this, since your wishes for temporary relinquishment will not come to pass
> > (at least in the AI area) IMO. Or are you planning to start your own org
>
> I agree this was a bit too verbose, but at least for me the debate was
> fruitful. The logic of it all drove me to adopt a position I formerly
> wouldn't have dreamed of holding. Strange, strange world.
>
> > politicking for these measures and controls?
>
> No, I'm more interested in nanotechnology, specifically molecular
> circuitry. I'm no good at politics, and I don't see the threat to
> become relevant before two to three decades have passed. If my mental
> faculties are still sufficient then, we will see.
>
> Meanwhile, I would ask you and Eliezer to reevaluate your project,
> specifically reassessing whether what you're trying to build is indeed
> what you will wind up with, especially if you decide to use
> evolutionary algorithms as part of the seed technology.

-- 
Brian Atkins
Director, Singularity Institute for Artificial Intelligence
http://www.singinst.org/
