Re: Pointless Re: Eugene's nuclear threat

From: Eugene Leitl
Date: Wed Oct 04 2000 - 04:18:38 MDT

Brian Atkins writes:
> You must be suffering some serious cognitive dissonance about now, since
> your chosen area of study is exactly the thing that may open Pandora's box
> so to speak. And it isn't just us- you've got your favorite de Garis who

Sure, but we'll need that hardware for human uploads, and this is
about the only thing which can save our ass in the long run. I'm not
to be held responsible (notice the relatively, and uncharacteristically,
radical attitude towards misuse of certain technologies demonstrated
in my recent sequence of posts) if that hardware is put to wrong use
by irresponsible people. (And, yeah, fat chance of me actually having
any measurable impact, so ...)

Notice that molecular circuitry of the self-assembly kind is quite
useless for gray goo type technologies (though it *is* applicable to
virus design, alas). But efficient bioweapon countermeasures are well
within reach of current or near-future technologies, such as DNA
screening, sterile barriers and quarantining.

> is just waiting for faster processing, plus the following bit from the SL4

My favourite? Hardly; he's just the one focusing on the technologies
most likely to succeed. Eliezer can add de Garis to the Evil People
list; his artilect essays do seem to indicate the usual Moravecian
form of mental derangement (mind children über alles).

> list last night will also probably give you severe indigestion. You know

Bring on the good stuff. I'm stocked on Lefax.

> actually SIAI has got to be the least of your worries since we have no
> plans at this point to make use of evolutionary programming techniques...
So Eliezer said, and I'm glad to hear it. However, you may wind up
introducing evolutionary algorithm use via the back door, once code is
rewriting code; hence my appeal to continuously revise the goals and
implementation of the project. You will keep it offline, won't you?

For the record, I don't think SIAI is a considerable threat, both
because of its limited resources (you won't start writing Net worms,
will you?) and because of its focus on brittle AI and its declared ban
on evolutionary algorithms. The real dangers lie with researchers like
de Garis, especially in military research, and with renegade authors
of Net worms using evolutionary algorithms. I assume the latter has a
high threshold on resource use and a delayed onset, due to the
bootstrap problem of self-modifying software.

> Subject: RE: Ben what are your views and concerns
> Date: Sun, 1 Oct 2000 23:25:06 -0400
> From: "Ben Goertzel" <>
> I don't have ~big~ concerns about AI that becomes evil
> due to its own purely AI motivations
> I do worry, though, about evil humans using AI in evil ways

That's weird, my concerns run exactly the other way round. You can't
use a god. That's a lot like the deermouse under the bush yonder
trying to use you.
> See, Internet AI systems aren't going to have an inbuilt urge for
> aggression -- they're not growing up
> in a predator-prey ecosystem... they'll mainly grow up to help and serve

Yeah? Look at system security, it's a jungle out there.

> people in various ways; their
> "fitness" won't have to do with killing other creatures or becoming "leader
> of the pack," but rather with
> being as useful as possible to their human masters...
If we're talking about the positive-autofeedback self-enhancement
process occurring in the future Net matrix, being as useful as
possible to their (former) human masters will be the last thing on
their agenda. Surviving sysadmins hunting them and pulling hardware
down from under their feet will be their typical environment. This
selects for stealth, cunning and optimal resource (CPU and bandwidth)
use.

> What kinds of weird AI psychology will emerge once AI systems move on beyond
> being primarily servants
> to humans... I don't know, but have speculated
> Here is a moderately brief excerpt from my forthcoming book "Creating
> Internet Intelligence" (Plenum press), which verges

Oh, god. Another candidate for the "Evil People" list.

> on this topic:
> -- ben
> ****
> With something as new and different as this [Internet AI], it would be easy
> to slip up and create a disaster. Or would it? Maybe there are inexorable
> forces of evolution at work here, and the conscious acts that we take are
> just tiny little nudges one way or the other. Maybe if a disaster is
> inevitable, there's no act that any of us could take to stop it anyway?

Yet another defeatist. Sorry, don't buy it. He might want to lie down
in the path of the Juggernaut; me, I have sudden important business to
attend to elsewhere.

> Anything's possible, of course, and in the presence of so many unknowns,
> assigning probabilities to various outcomes is going to be more intuitive
> than rational. My intuition is that what's going to happen will be good -

I wouldn't rely on anybody's intuition in matters like this. This
needs cold, hard numbers, and considerable modelling resources
invested.

> intelligence, creativity and passion will be served; instinctive, habitual
> routines will be loosened; the process of forming and destroying boundaries
> between people, groups and ideas will transform into something we can't yet
> understand. But this is just the intuition of my little human brain,
> supercharged by whatever "collective unconscious" forces it's managed to tap
> into. How to assess how much this is worth?
Using a broken bit of pencil and a piece of paper, no less. Some assessment.

> Much of my intuition about the long-term future of the Net comes from my
> work on Webmind. Webminds, as individual minds, are going to be useful for

Another fancy word for AI, I presume. Grabbing credit by pushing
catchy neologisms. Webmind, artilect, you name it. A spade is not an
entrenching tool, it's a spade.

> various practical applications as described in the previous chapter; but
> they'll also be autonomous, self-directed systems, concerned with achieving
> their own goals and their own happiness. What happens when the Internet is
> dominated by a community of AI agents, serving commercial, collective and
> individual goals? What will be the nature of this Webmind society, and its
> implications for us?
Something we can't possibly imagine, since it lies outside a single
human's scope of perception.
> Of course, Webmind society is not going to be the Internet as a whole. But
> there's reason to believe that the core of the first phase of the
> intelligent Internet will indeed be centered on a community of powerful AI
> agents. And if this is true, that's all the more reason to understand how
> these agent societies are going to operate. For example, what's the chance
> that Webminds and other AI agents are going to sit around all day exchanging
> encrypted messages about how to accelerate the obsolescence of the human
> race?
Huh? That time would be better invested in self-enhancement. The
obsolescence of biology would then emerge as a side effect.
> I don't know the answers to these questions, but I've thought about them a
> good bit, and discussed them with others. In this chapter I'll give some
> ideas about Webmind societies, and their implications for the future of the
> Net in general. In concrete terms, these ideas concern the Intelligent
> Internet phase, more so than the Global Brain phase - they have to do with
> computer programs on the Net, not with human brains jacked into the Net,
> bio-digital intelligence, or the behavior of AI systems filled with the
> contents of uploaded human minds. But, thinking more laterally, it seems
> likely that the nature of the society of AI agents in the Intelligent
> Internet phase is going to be critical in setting the stage for the nature
> of the true Global Brain to follow. If this is the case, then Webmind
> societies are truly a highly important issue.

Somebody seems to be getting lost in his mental constructs. Seems like
a regular job hazard in these circles <sheepish grin>.
> The Webmind Inc. "tech list" e-mail discussion group has sustained a number
> of long threads on the topic of Webmind morality, and social interaction
> among groups of Webminds. These discussions sometimes seem frivolous, mixed
> in as they are with reports of bugs in Webmind's basic thinking processes,
> basic questions about Java programming and Webmind structure from new
> employees, and debates over new features in Webmind's reasoning, language,
> or data analysis modules ... but, although they sometimes are frivolous, they
> are also important. We all proceed fairly blindly into future, but if we
> squint our eyes hard enough, we can see a little bit, and after a lot of
> thinking about where we want to go, we can have at least a little input into
> our direction of movement. In many areas these internal company discussions
> have gone far deeper than the discussions on the Global Brain mailing list,
> as excerpted above.
More entries for Eliezer's Evil People list.
> One consequence of our discussions on Webmind morality has been the
> realization that Teilhard really was wrong - the global brain will not be
> perfect! In fact, the same flaws that plague human society will plague the

Gimme a break, Teilhard de Chardin was a friggin member of a Church
order, not a cognition expert.

> Intelligent Internet, though hopefully to a lesser degree, and definitely
> with a different flavor. Furthermore, as a consequence of this, the
> convergence of the Net with the Jungian vision of the collective unconscious

Ok, he's gone completely overboard now.

> will be greater than it might seem at first glance. Many of the archetypes
> of the human unconscious emerge from socialization, from the dynamics of
> society. And there are certain aspects of social dynamics that seem to be
> universal, that are bound to emerge in the global brain once it reaches a
> certain level of complexity, just as they have emerged among humans.
More meaningless garbage.

> We've seen how it's possible to embody a Webmind with compassion - you
> program it so that its happiness will increase when it senses that the other
> actors it interacts with are happy. One then has a collection of Webminds

Yet another programmer with an idée fixe. "I can program the Moon from
the skies, and program the stormy seas to grow calm." Here, have
another Xanax.

> that want to please each other. This enhances the intelligence of the
> overall community of Webminds, because the Webminds have an intrinsic
> motivation to supply each other with the best answers to their questions,
> and to provide each other with resources when needed. If this were Webminds

Why not leave this to a proven method: co-evolution? If they're so
damn smart, why can't they figure out how to cooperate?

> ' only motivation, one would soon have a community of morons, babbling
> digital nonsense to each other in a chorus of mutually supportive, ignorant
> bliss. But overlaid on a system in which Webminds achieve happiness by
> creating patterns and satisfying users, and pay each other for intelligent
> answers to their questions, compassion enhances emergent intelligence. This
> hasn't been proven in practice yet, since we have not yet built a large
> network of Webminds. But we've set up simulations that have borne out this
> intuition.
Based on which models?
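To make the question concrete: here is one toy reading of the setup he describes (entirely my construction; nothing in the excerpt specifies the actual simulations). Each agent gets a base utility, and a "compassion" weight couples its happiness to the mean happiness of the others; iterate to a fixed point and compare collective happiness with and without the coupling.

```python
# Toy sketch (my assumption, not the actual Webmind model): N agents,
# each with a base utility; a compassion weight c couples each agent's
# happiness to the mean happiness of the other agents.
def equilibrium_happiness(base, c, iterations=200):
    n = len(base)
    h = list(base)
    for _ in range(iterations):
        total = sum(h)
        # h_i = base_i + c * (mean happiness of the *other* agents)
        h = [base[i] + c * (total - h[i]) / (n - 1) for i in range(n)]
    return h

base = [1.0, 2.0, 3.0, 4.0]
selfish = equilibrium_happiness(base, c=0.0)        # total stays at 10
compassionate = equilibrium_happiness(base, c=0.5)  # total converges to 20
```

With 0 < c < 1 the iteration is a contraction and the collective total converges to sum(base)/(1-c), so positive spillover trivially raises aggregate happiness in this model — which is exactly why "based on which models?" matters: the conclusion is baked into the coupling.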

> So far, so good. But what happens when someone introduces a
> non-compassionate Webmind (or another non-compassionate intelligent actor)
> into the mix? A whole system of selfish Webminds works worse than a whole
> system of compassionate Webminds. But is global compassion a stable

Why? Rational selfishness will eventually give rise to the emergence
of progressively more benign cooperative strategies.
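This is the classic Axelrod-style argument, which can be sketched in a few lines (my own illustration, not anything from the thread): in a repeated game, reciprocal cooperators (tit-for-tat) outscore unconditional defectors once interactions last long enough, so selection erodes pure selfishness rather than rewarding it.

```python
# Iterated prisoner's dilemma, standard single-round payoffs:
# mutual cooperation 3, mutual defection 1, sucker 0, temptation 5.
def match_payoffs(rounds):
    tft_vs_tft = 3 * rounds
    tft_vs_alld = 0 + 1 * (rounds - 1)   # suckered once, then mutual defection
    alld_vs_tft = 5 + 1 * (rounds - 1)   # exploits once, then mutual defection
    alld_vs_alld = 1 * rounds
    return tft_vs_tft, tft_vs_alld, alld_vs_tft, alld_vs_alld

def evolve(x_tft, generations=50, rounds=100):
    """Replicator dynamics on the fraction x of tit-for-tat players."""
    cc, cd, dc, dd = match_payoffs(rounds)
    x = x_tft
    for _ in range(generations):
        p_tft = x * cc + (1 - x) * cd    # expected score of a reciprocator
        p_alld = x * dc + (1 - x) * dd   # expected score of a defector
        x = x * p_tft / (x * p_tft + (1 - x) * p_alld)
    return x

final = evolve(0.5)  # with long matches, tit-for-tat takes over
```

The design point: the same dynamics with rounds=1 drive cooperation extinct, so "rational selfishness gives rise to cooperation" hinges on repeated interaction with recognizable partners — which is also where Goertzel's selfishness-recognition problem bites.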

> situation? One selfish Webmind, in a compassionate community, will have an
> intrinsic advantage - it will in effect be able to make itself king. More
> and more selfish Webminds will then get introduced into the system, as
> others see the value of selfishness for achieving their goals. The
> compassionate society will dissolve.
> What's the solution? One answer is benevolent fascism. Erect a global
> authority, which makes sure that only compassionate Webminds get released
> into the Net. But this will never work. The Net is too disorganized and
> self-organized; no one owns it.
More computer scientist's armchair philosophy.

> The only other answer that I see is, painfully enough, social ostracism.

Duh, what deep insight. And it's quite telling when he calls it
"painfully enough".

> Compassionate Webminds need to take a "tough love" approach to selfish
> Webminds, and refuse to deal with them, even if it would be to their
> short-term economic advantage to do so. It then becomes a bad strategy for
> a single Webmind to be selfish. This seems simple enough. But the problem
> is, how do you recognize selfishness, from the outside? It's not so easy.

Seems he hasn't read "Rules of Encounter" yet.

> This is just another tough pattern recognition problem. Seeing examples of
> selfishness, and knowing some properties of selfishness, Webmind can learn
> to recognize selfishness by certain signs. But then, Webminds will get a
> hang of the "selfishness recognition systems" of other Webminds, and learn
> how to fool each other. Just as humans trick each other by false facial
> expressions and tones of voice. And furthermore, there will be Webminds
> that are perfectly compassionate, but that unintentionally give the signs of
> being selfish - "false negatives" for the selfishness recognition systems of
> their fellow Webminds.

He forgot to talk about ships and sealing wax.

> You have to act right to be accepted. If you don't act right, nobody wants
> to talk to you. Some of the ways of acting "wrong" may actually be better than
> the accepted ways of doing things, but no one seems to recognize this. You
> either have to go along with the majority, accept your isolation, or band
> together with similar freaks who go against the prevailing standard of what
> is the correct way to be. This may sound familiar to many readers - it is
> definitely familiar to me, from my teenage years, particularly the five
> miserable years I spent in middle school and high school, before leaving for
> college. Unfortunately, it seems that a certain amount of this stuff is
> going to be there in Webmind communities as well. Not all of the nastiness
> of human society can be avoided, some of it is an inevitable consequence of
> the information-processing restrictions imposed by the finitude of mind. We
> can't tell what's really good or not, so we have to estimate, and our
> estimation errors may be painful for their victims.
He sounds genuinely screwed up. Why must the outcasts always try to
improve the world?

> And what happens when a band of freaks, going against the prevailing
> standards of right, gets large enough? It becomes an alternative community.
> You then have two groups, each one of which judges goodness according to its
> own criteria, its own estimates. Each one may judge the other one as bad.
> And - maybe - try and wipe the other one out, in the name of goodness?

Whee! Here we go. Let me wipe thee out, in the name of goodness. You
can't possibly object, since it's in your own best interest. So, there.

> Will things go this far in Webmind society? Will warfare erupt among
> Webminds, based on differing groups that use different pattern recognition
> algorithms to estimate goodness? Actually I doubt it. The saving grace of

He doubts it, now I feel reassured.

> digital intelligence, I believe, will be its adaptability. Webminds can
> change much more rapidly than humans. Potentially, they can even revise
> their brains. Right now this is well beyond any existing software, but in a
> decade or so, we may have Webminds that can rewrite their own Java code to
> improve functionality.
In a totally controlled fashion, eh. Routing yourself around that
pesky undecidability. Dream on.
> I don't think there is much relation between the goodness of a society and
> the intelligence of the actors who make it up. Yes, more intelligent actors
> can figure out what features indicate goodness better. On the other hand,
> they can also figure out how to fool each other better. The two factors

Only smart players can progress to progressively more benign
cooperation strategies. The more primitive the players, the more red
in tooth and claw the game.

> probably balance out. On the other hand, I do think that adaptability
> encourages goodness. A fair amount of the stupidity of human society can be
> traced to our slow adaptation, in particular to the inability of our brains
> to respond to cultural changes.
Sure, but what makes him think that the virtual ecology as a whole
will move on to more intelligent players? If you look at biological
ecology (admittedly in transit), humans dominate by the metric ton,
and certainly not by the number of individuals.

> We humans are to a great extent locked in by our evolutionary history.
> There are hundreds of examples of this - one is the way that women's sexual
> infidelity is treated much more seriously than men's, in all human cultures.
> Many women find this unfair, and I would too in their place, but the reason
> is obvious, if one takes a sociobiological, DNA-centric view. If a woman
> has a baby by a different man than her husband, then the husband, insofar as
> he is supporting and protecting the child, is wasting his time propagating
> someone else's DNA. His DNA is angry: it wants to propagate itself. On the
> other hand, if a man impregnates a different woman than his wife, this doesn't
> matter much to the wife's DNA. All her DNA wants is for the husband to
> keep supporting her children, which carry it into the future. So the extra
> stigma attached to female infidelity makes sense from an evolutionary
> perspective. But from a modern human perspective, it is almost completely
> obsolete. Now, women can use birth control, hence they can sleep around

Agreed here. But you'll have a hard time fighting your hardwired
behaviour, even if you're aware of what is going on.

> without much risk of pregnancy. Also, most women are no longer producing
> children on a continual basis, so that most acts of infidelity do not
> produce any question of paternal identity. Finally, we have DNA testing, so
> that, in principle, every new father can test his child's DNA to see if he's
> the real father or not, thus eliminating the risk of his DNA wasting much of
> its effort propagating a competing DNA pattern. Have these developments

You know what? They're selling DNA home-testing kits, and they're
selling like crazy. Somehow, daddies still seem to adhere to what
their DNA is telling them.

> decreased the stigma attached to female infidelity? Yes, a bit. Cheating
> women are no longer routinely killed. We are not completely pawns of our
> evolutionary heritage. But, they have not decreased it as much as they
> should have, and they probably never will. Our mechanisms for judging
> others are not very adaptive.
Calvin seems to have a useful bibliography on that: I personally
find Konner, Melvin. The Tangled Wing: Biological Constraints on the
Human Spirit. New York: Holt, Rinehart & Winston (1982) dated, but

> To take another example, Freud, in "Civilization and Its Discontents"
> (1984), argued that neurosis is a necessary consequence of civilization.
> His reason was that civilization requires us to check our primitive impulses
> toward violence, to restrict our behavior in biologically unnatural ways.

Untypically insightful of Freud.

> In the terms I am using here, what was good in the context of tribal society
> is no longer good in modern society, and this causes problems. Webminds
> will not have much of this kind of problem: faced with the situation Freud
> describes, they would just rewire themselves to be less violent.

Great opportunity to slide off into Red Queen regime here.

> Webmind society will thus be very different from ours. Social codes and
> standards will change continually and rapidly. It is hard to imagine what
> it would be like to live in such a way - but it's not impossible. Because,

So saith the ant from his blade of grass, while observing the party of

> after all, social codes and standards are changing more rapidly every
> decade. Society has moved into fast-forward mode. Aboriginals dressed and
> acted the same way for sixty thousand years; now styles change every six
> months. The dynamism of internet intelligence and the dynamism of
> contemporary culture will intersect to give the global societal mind a
> colorful, vibrant, wild character that
> I could express in music or pictures much more easily than in words.
Not also in graphs and equations? A rather superficial form of expression.

> Many features derived from human sexuality will be missing from Webmind
> society, since the types of reproduction available to Webminds will be much
> more diverse: a Webmind can clone itself, or can "cross over" with any
> number of other Webminds, yielding Webminds with 2, 3, or 100,000 parents.

It is not important how many partners you cross over with, as long as
you cross over (yeah, and of course you have to limit the diversity of
those you're crossing over with, or else you just wind up with
meaningless garbage in place of your genome). And if you think finding
a partner today is hard, try finding 10 suitable ones simultaneously.
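For what it's worth, multi-parent crossover is a perfectly standard GA operator; a minimal sketch (the encoding here is my own assumption) picks, for every locus, which of the k parents contributes the gene:

```python
import random

def multi_parent_crossover(parents, rng):
    """Uniform crossover over k parents: each gene of the child is
    copied from a randomly chosen parent at that locus."""
    length = len(parents[0])
    assert all(len(p) == length for p in parents)
    return [rng.choice(parents)[i] for i in range(length)]

rng = random.Random(42)
parents = [[p] * 8 for p in range(5)]   # 5 parents, 8 genes each
child = multi_parent_crossover(parents, rng)
# Every gene of the child comes from some parent, at the same locus.
```

Note that the operator itself is the easy part; the hard part is exactly the point above — keeping the parent pool similar enough that the recombined genome still means something.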

> Furthermore a Webmind can be progressively altered by itself or its owner,

Self-neurosurgery, the path to self-improvement. There is no way to
evaluate this without an external authority (a former self, or a
number of others).

> yielding a continuous evolution of personality that is not accessible to
> humans at all, due to our inability to modify our own brain structure except
> crudely through drugs. But even with this new diversity, much of the

Of course, this is immaterial for uploaders, and even for biomedical nanotechnology.

> archetypal structure of human relationships will be there. We know, from
> our research with genetic algorithms, that sexual reproduction is much more
> efficient than asexual reproduction by fission or continuous development.
> So Webminds will reproduce sexually even though they have other options open
> to them. And genetic algorithm experiments show that multi-parent
> reproduction is not significantly more effective than two-parent
> reproduction. So many Webminds will have two parents, though there will be
> no difference between mom and dad.
There is no need for sexual dimorphism, since there is no need for
intracorporeal gestation, but maybe sexual selection will cause the
two sexes to become asymmetrical. This ought to be modelled.

> Webminds will be careful about whom they reproduce with. If a Webmind has
> access to certain resources, in which it wants to place one of its children,
> it will want to make this child as good a child as possible. Furthermore,
> once it has observed that it can produce a good child with another Webmind,
> it may want to maintain this relationship over time.

Of course, this assumes that a limited scope player knows what
global-scope fitness is, which is just wishful thinking.

> "Childhood" among Webminds will not necessarily mean the same thing as it
> does among humans. It is possible for two Webminds to mate and birth a
> fully-formed Webmind, ready for action. On the other hand, it may be very

Not if you're genome-based; you'll have to express the phenotype
(unless you consider your bits and your state to be both phenotype and
genotype), which will take time, and the proper context. Just as with
human babies. Otherwise you don't know what a piece of genome is
coding for.

> useful for a Webmind to create a "baby Webmind", with a partially empty
> brain. In this way it may arrive at something much smarter than itself, or
> at least something with new and different ideas. A baby Webmind, however,

No other way to know whether you're in a local minimum without trying
to break out. Casualties calculated.

> will require a teacher. The notion of parental responsibility arises.
> Webminds that take good care of their babies will be more likely to produce
> successful babies. Thus, by evolutionary pressure, Webminds will come to
> have an "instinct" that taking care of baby Webminds is good. The urge to
> take care of baby Webminds will be automatically passed along from parent to
> child..
Sounds sensible.

> And so it goes. The community of Webminds will not be exactly like human
> society - far from it. But it will not be entirely different either. The
> Jungian archetypes of union, child, family, will all be there, overlaid with
> other archetypes that we can barely even envision, all improvising on the
> theme of the basic numerical archetypes, the combinations of 0's and 1's

Could as well use ternary logic, or whatever discrete coding the
hardware layer suggests. Biology did come up with four bases, not two,
because the chemistry allowed for sufficient distinguishable
variability in a given molecular framework. Similar considerations
apply to the electronically excited states of your molecular switching
matrix.

> that make up the mind and the world. The human collective unconscious will

"Collective unconscious"? Now he's raving again.

> be made concrete via the action of intelligent actors on human text and on
> numerical data representing human activities. But it will be made non-human

Hogwash. Why should these things be confined to the network? They
will represent whatever is necessary, including self, others and
environment, whether inside or outside. You don't live long if someone
can step up, stick out the tongue and then pull your plug.

> via the intrinsic peculiarities of these intelligent actors and their
> interactions. Their own unconscious patterns will filter down into human
> society, so that we are affected in subtle ways by the feeling a digital

Assuming, there is still a human society at this stage, of
course. Which seems like going out on a limb.

> actor gets when it has 1000 parents, as opposed to 1 or 2. Much of each
> human being's brain will be filled with patterns and ideas of digital
> origin, just as much of the intelligent Internet will be filled with
> patterns and ideas of human origin. All this is bound to occur as a

As long as they don't fall into the positive-autofeedback
self-improvement loop and become utterly incomprehensible, I'm all for
it.

> consequence of our incessant daily interaction with the Net, and the Net's
> increasing self-organizing intelligence.

My, the Church of Webminds. Perhaps he should sell the concept to the Bahai.

This archive was generated by hypermail 2b30 : Mon May 28 2001 - 09:50:15 MDT