Re: Fwd: Earthweb from transadmin

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Sep 23 2000 - 10:05:43 MDT


Samantha Atkins wrote:
>
> You are right that human goals are not uniformly friendly to human
> beings. But I would tend to agree with the POV that an intelligence
> built on or linking human intelligence and automating their interaction,
> sharing of knowledge, finding each other and so on would be more likely
> to at least majorly empathize with and understand human beings.

Samantha, you have some ideas about empathy that are flat wrong. I really
don't know how else to say it. You grew up on a human planet and you got some
weird ideas.

The *programmer* has to think like the AI to empathize with the AI. The
converse isn't true. Different cognitive architectures.

> Why would an Earthweb use only humans as intelligent components? The
> web could have a lot of non-human agents and logic processors and other
> specialized gear. Some decisions, particularly high speed ones and ones
> requiring major logic crunching, might increasingly not be made
> explicitly by human beings.

Then either you have an independent superintelligence, or you have a process
built from human components. No autonomic process, even one requiring "major
logic crunching", qualifies as "intelligence" for these purposes. A thought
requires a unified high-bandwidth brain in order to exist. You cannot have a
thought spread across multiple brains, not if those brains are separated by
the barriers of speech.

Remember, the default rule for "folk cognitive science" is that you see only
thoughts and the interactions of thoughts. We don't have built-in perceptions
for the neural source code, the sensory modalities, or the contents of the
concept level. And if all you see of the thoughts is the verbal traceback,
then you might think that the Earthweb was thinking. But three-quarters of
the complexity of thought is in the underlying substrate, and that substrate
can't emerge accidentally - it doesn't show up even in "major logic
crunching".

> With the AI you only know what goes in at the beginning. You have no
> idea what things will develop from that core that you can have full
> confidence in.

There is no "full confidence" here. Period.

That said, the Earthweb can't reach superintelligence. If the Earthweb
*could* reach superintelligence, then I would seriously have a harder time
visualizing the Earthweb-process than I would with seed AI. Just because the
Earthweb has human components doesn't make the system behavior automatically
understandable.

> > I shouldn't really be debating this. You can't see, intuitively, that a
> > superintelligence is solid and the Earthweb is air and shadow by comparison?
> > The Earthweb is pretty hot by human standards, but only by human standards.
> > It has no greater stability than the human species itself. It may arguably be
> > useful on the way to a solid attractor like the Sysop Scenario, but it can't
> > possibly be anything more than a way-station.
>
> If we could see it intuitively I would guess none of us would be
> questioning it.

Not true. Many times, people perceive something intuitively without having a
name for that perception, so they forget about it and start playing with
words. I was hoping that was the case here... apparently not.

> Frankly I don't know how you can with a straight face
> say that a superintelligence is solid when all you have is a lot of
> theory and your own basically good intentions and optimism. That isn't
> very solid in the development world I inhabit.

Wrong meaning of the term "solid", sorry for using something ambiguous. Not
"solid" as in "easy to develop". (We're assuming that it's been developed and
discussing the status afterwards.) "Solid" as in "internally stable".
Superintelligence is one of the solid attractors for a planetary technological
civilization. The other solid attractor is completely destroying all the life
on the planet (if a single bacterium is left, it can evolve again, so that's
not a stable state).

Now, is the Earthweb solid? In that million-year sense?

> I hear you but I'm not quite ready to give up on the human race as a
> bad job

Do you understand that your sentimentality, backed up with sufficient power,
could easily kill you and could as easily kill off the human species?

> Your own basic optimism would seem to indicate that as
> intelligence goes up the ability to see moral/ethical things more
> cleanly and come to "friendlier" decisions goes up.

Yep. The key word in that sentence was "ability", by the way. Not
"tendency", "ability". That was my mistake back in the early days (1996 or
so).

> I think it behooves us to try all approaches we can think of that might
> remotely help. I'm not ready to put all my eggs in the AI basket.

Try everything, I agree. I happen to think that the AI is the most important
thing and the most likely to win. I'm not afraid to prioritize my eggs.

> So instead you think a small huddle of really bright people will solve
> or make all the problems of the ages moot by creating in effect a
> Super-being, or at least its Seed. How is this different from the old
> "my nerds are smarter than your nerds and we will win" sort of
> mentality?

You're asking the wrong question. The question is whether we can win.
Mentalities are ever-flexible and can be altered; if we need a particular
mentality to win, that issue isn't entirely decoupled from strategy but it
doesn't dictate strategy either.

> By what right will you withhold behind closed doors all the
> tech leading up to your success that could be used in so many fruitful
> ways on the outside? If you at least stayed open you would enable the
> tech to be used on other attempts to make the coming transitions
> smoother. What makes you think that you and your nerds are so good that
> you will see everything and don't need the eyeballs of other nerds
> outside your cabal at all? This, imho, is much more arrogant than
> creating the Seed itself.

The first reason I turned away from open source is that I started thinking
about the ways that the interim development stages of seed AI could be
misused. Not "misused", actually, so much as the miscellaneous economic
fallout. If we had to live on this planet for fifty years I'd say to heck
with it, humanity will adjust - but as it is, that whole scenario can be
avoided, especially since I'm no longer sure an AI industry would help that
much with building the nonindustrial AI.

So far, there are few enough people who demonstrate that they have understood
the pattern of "Coding a Transhuman AI". I have yet to witness a single
person go on and extend the pattern to areas I have not yet covered. I'm
sorry, and I dearly wish it were otherwise, but that's the way things are.

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
