Alex Future Bokov wrote:
>
> Um, yes, that is the original debate, and the reason this thread has
> EarthWeb in its name. To recap, Eli and I agree that the capabilities
> of an AI as he envisions it would be a superset of the capabilities of
> an EarthWeb-like entity as I envision it. However, I'm arguing that an
> EarthWeb is more easily achievable, by definition friendly to human
> goals, and possibly sufficient for preventing runaway techno-disaster.

On the contrary: the Earthweb is only as friendly as it is smart. I really
don't see why the Earthweb would be benevolent. Benevolent some of the time,
yes, but all of the time? The Earthweb is perfectly capable of making
mistakes that are outright stupid, especially if it's an emotionally charged
issue being considered for the first time.

I have no confidence in any decision-making process that uses humans as the
sole intelligent components. Period. I know what goes into humans, and it
isn't sugar and spice and everything nice. I can have some confidence that a
well-built Sysop is far too smart to make certain mistakes; I have no such
confidence in humans. Because we have no choice - because even inaction is
still a decision - we must rely on human intelligence during the run-up to
Singularity. But I will not bet the planet on humans being able to
intelligently wield the powers that become available beyond that point.

I shouldn't really be debating this. You can't see, intuitively, that a
superintelligence is solid and the Earthweb is air and shadow by comparison?
The Earthweb is pretty hot by human standards, but only by human standards.
It has no greater stability than the human species itself. It may arguably be
useful on the way to a solid attractor like the Sysop Scenario, but it can't
possibly be anything more than a way-station.

An argument one could make is that the Earthweb would be better able to
handle the run-up to a superintelligence scenario, perhaps by holding the
planet together long enough to get genuinely enhanced humans into play. I think
you'd still be wrong. I think that realistically speaking, the construction
of the first versions of the Earthweb would be followed by six billion people
shouting their prejudices at top volume. And yes, I've read _Earthweb_ and
Robin Hanson's paper on idea futures and I still think so.

Collaborative filtering is actually much more powerful than the technologies
depicted in _Earthweb_, and I think there's actually an excellent possibility
that collaborative filtering will show up before the Singularity and maybe
even play a major part in it. But I'm not relying on collaborative filtering
to swing public opinion in favor of the Singularity. If it happens, great,
but I can't rely on it. At the end of the day, the memetic baggage of a
planet just has too much inertia to be altered in such a short amount of
time. Back when I expected a Singularity in 2030, I was a lot more interested
in Singularity memetics; as it is...

This isn't a game; we have to win as fast as possible, as simply as possible,
and not be distracted by big complex strategies. That's why I ditched the
entire running-distributed-over-the-Internet plan and then ditched the
open-source-AI-industry strategy; too many ways it could all go wrong. The
Earthweb scenario, if you try to visualize all the details, is even worse.

> Eliezer then pointed out that the decisions of EarthWeb would not be as
> enforceable as those of an >AI sysop. I don't understand the argument;
> if humans/>AI/some combination design nanotech, and then a general
> purpose MatterOS to control it, why should enforcement of the operator's
> will be any more problematic just because the will is the consensus of
> human wills as represented by markets and reputations as opposed to being
> the will of a single >AI? Isn't the enforcement done by the MatterOS layer,
> regardless of who is holding the reins?

The MatterOS I visualize requires superintelligence; it is not literally an
OS. Furthermore, a MatterOS is too much power to be entrusted to any human or
group of humans, even (or especially) an entire planetary group of humans -
forget about power corrupting; even benevolent humans, including myself, are
just not that smart.

Also, if the MatterOS isn't superintelligent, then as far as I can tell
someone (e.g., me) writes a superintelligence and the superintelligence takes
over. Any MatterOS designed by beings without a codic cortex would
undoubtedly be easy enough to break - we can't even get the real-world Java
sandboxes right, and you want us to try it on a whole Solar System? The first
woodpecker that came along would wind up with root permissions.

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence