Re: Fwd: Earthweb from transadmin

From: Samantha Atkins (samantha@objectent.com)
Date: Sat Sep 23 2000 - 03:13:39 MDT


"Eliezer S. Yudkowsky" wrote:
>
> Alex Future Bokov wrote:
> >
> > Um, yes, that is the original debate, and the reason this thread has
> > EarthWeb in its name. To recap, Eli and I agree that the capabilities
> > of an AI as he envisions it would be a superset of the capabilities of
> > an EarthWeb like entity as I envision it. However, I'm arguing that an
> > EarthWeb is more easily achievable, by definition friendly to human
> > goals, and possibly sufficient for preventing runaway techno-disaster.
>
> On the contrary; the Earthweb is only as friendly as it is smart. I really
> don't see why the Earthweb would be benevolent. Benevolent some of the time,
> yes, but all of the time? The Earthweb is perfectly capable of making
> mistakes that are outright stupid, especially if it's an emotionally charged
> issue being considered for the first time.
>

You are right that human goals are not uniformly friendly to human
beings. But I would tend to agree with the POV that an intelligence
built on or linking human intelligences, automating their interaction,
their sharing of knowledge, their finding of one another and so on,
would be more likely to at least substantially empathize with and
understand human beings.

> I have no confidence in any decision-making process that uses humans as the
> sole intelligent components. Period. I know what goes into humans, and it
> isn't sugar and spice and everything nice. I can have some confidence that a
> well-built Sysop is far too smart to make certain mistakes; I have no such
> confidence in humans. Because we have no choice - because even inaction is
> still a decision - we must rely on human intelligence during the run-up to
> Singularity. But I will not bet the planet on humans being able to
> intelligently wield the powers that become available beyond that point.
>

Why would an Earthweb use only humans as intelligent components? The
web could have a lot of non-human agents and logic processors and other
specialized gear. Some decisions, particularly high-speed ones and ones
requiring heavy logic crunching, might increasingly not be made
explicitly by human beings at all.

With the AI you only know what goes in at the beginning. You have no
idea what will develop from that core, so there is nothing there to
have full confidence in. The things that make up the core, the seed,
are initial values, but frankly we do not know what kinds of equations
we are dealing with, how chaotic they are, or much about which
attractors are likely.
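
To make that point concrete, here is a toy illustration (a minimal
sketch in Python, assuming nothing about any particular seed design):
the logistic map, a textbook chaotic system, in which two initial
values differing by one part in a billion lose all resemblance within
a few dozen iterations. Knowing the initial values tells you almost
nothing about where the trajectory ends up.

    # Toy illustration only: the logistic map x' = r*x*(1-x) with r=4
    # is chaotic. Two nearly identical initial "seed" values diverge
    # until their difference spans the whole unit interval.
    def logistic(x, r=4.0):
        return r * x * (1.0 - x)

    a, b = 0.400000000, 0.400000001  # initial values differ by 1e-9
    for _ in range(60):
        a, b = logistic(a), logistic(b)

    print(abs(a - b))  # typically of order 0.1 to 1: the tiny initial
                       # difference has been amplified to the full
                       # width of the state space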

 
> I shouldn't really be debating this. You can't see, intuitively, that a
> superintelligence is solid and the Earthweb is air and shadow by comparison?
> The Earthweb is pretty hot by human standards, but only by human standards.
> It has no greater stability than the human species itself. It may arguably be
> useful on the way to a solid attractor like the Sysop Scenario, but it can't
> possibly be anything more than a way-station.
>

If we could see it intuitively, I would guess none of us would be
questioning it. Frankly, I don't know how you can say with a straight
face that a superintelligence is solid when all you have is a lot of
theory and your own basically good intentions and optimism. That isn't
very solid in the development world I inhabit. You could be largely
correct, but it isn't obvious that you are.

> An arguable argument would be that the Earthweb would be better capable of
> handling the run-up to a superintelligence scenario, perhaps by holding the
> planet together long enough to get genuine enhanced humans into play. I think
> you'd still be wrong. I think that realistically speaking, the construction
> of the first versions of the Earthweb would be followed by six billion people
> shouting their prejudices at top volume. And yes, I've read _Earthweb_ and
> Robin Hanson's paper on idea futures and I still think so.
>

I hear you, but I'm not quite ready to give up on the human race as a
bad job and just go for building an AI that will do everything right so
that our basically hopeless brokenness doesn't end up killing us all.
EarthWeb is one vision along the way that is a bit more human, whose
steps are a bit closer to where we are, and that might, just
conceivably, provide a framework for beginning to get beyond mere
shouting.

Personally, it seems far more likely to me that humans will begin to
self-enhance and learn a bit more about living together peaceably and
productively, and that this will bring us through current and looming
crises. Your own basic optimism would seem to indicate that as
intelligence goes up, the ability to see moral and ethical questions
more clearly and come to "friendlier" decisions goes up as well. Or do
you believe that the very evolution that enabled our intelligence also
embedded tendencies so deep and intractable that we can never learn
this and act on it sufficiently? Do you believe we can of ourselves do
nothing but fight and squabble, even in the midst of great abundance
and opportunity?

If so, then exactly how will the AI being present as the "Sysop" make
our miserableness any less miserable and ornery? It might keep us from
killing each other, but your scenario does not show me that we would
get any wiser or saner. In fact, the presence of the Protector might
take away a considerable amount of the impetus to improve ourselves; it
would simply matter a lot less.
 
> [...] At the end of the day, the memetic baggage of a
> planet just has too much inertia to be altered in such a short amount of
> time. Back when I expected a Singularity in 2030, I was a lot more interested
> in Singularity memetics; as it is...
>

I think it behooves us to try all approaches we can think of that might
remotely help. I'm not ready to put all my eggs in the AI basket.

> This isn't a game; we have to win as fast as possible, as simply as possible,
> and not be distracted by big complex strategies. That's why I ditched the
> entire running-distributed-over-the-Internet plan and then ditched the
> open-source-AI-industry strategy; too many ways it could all go wrong. The
> Earthweb scenario, if you try to visualize all the details, is even worse.
>

So instead you think a small huddle of really bright people will solve,
or make moot, all the problems of the ages by creating in effect a
Super-being, or at least its Seed. How is this different from the old
"my nerds are smarter than your nerds and we will win" sort of
mentality? By what right will you withhold behind closed doors all the
tech leading up to your success, tech that could be used in so many
fruitful ways on the outside? If you at least stayed open, you would
enable that tech to be used in other attempts to make the coming
transitions smoother. What makes you think that you and your nerds are
so good that you will see everything and don't need the eyeballs of
other nerds outside your cabal at all? This, imho, is much more
arrogant than creating the Seed itself.

- samantha


