Re: AI over the Internet (was Re: making microsingularities)

Date: Sun May 27 2001 - 18:47:46 MDT

James Rogers, <>, writes:
> I am not talking about problems that are merely
> difficult or beyond current human ability, I am talking about concepts that
> you can sit down with a pencil and paper, using well-understood math, and
> prove the impossibility/stupidity of their implementation. I have not seen
> anything offered that explains how so many limits fundamental to the
> mathematics of the problem can be blithely ignored by a sufficiently
> intelligent AI.

Eliezer S. Yudkowsky, <>, writes:
> I'm sure you can do a mathematical proof that the Internet is
> dumbfoundingly slow if you want to maintain S=1, as in the human brain,
> where the characteristic communication delay time from an element to any
> other element is on the order of a single tick of a computing
> element's clock speed. But all that analysis proves is that cognition
> distributed over the Internet has very high S and that local packets of
> mind are isolated for millions, billions, or even trillions of ticks.
> However, you can say nothing at all about how much of an inefficiency this
> represents at the *cognitive* level.
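To put the "millions, billions, or even trillions of ticks" figure in
perspective, here is a quick back-of-the-envelope calculation. The 1 GHz
clock and 100 ms round-trip time are my own illustrative assumptions, not
figures from Eliezer's post:

```python
# Back-of-the-envelope: how many clock ticks does one Internet round
# trip cost a fast computing element? (Illustrative numbers only.)

clock_hz = 1e9       # assumed: a 1 GHz computing element
rtt_seconds = 0.1    # assumed: ~100 ms Internet round-trip time

ticks_per_rtt = clock_hz * rtt_seconds
print(f"One round trip = {ticks_per_rtt:.0e} ticks")  # prints 1e+08 ticks
```

So even under these mild assumptions, elements are isolated from each
other for on the order of a hundred million ticks per exchange.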

I think the only correct statement we can make at this point is that
we don't know whether it would be possible to do human- or superhuman-
level AI using an architecture like the Internet. Perhaps current
approaches to AI don't parallelize very well, but then they don't
come anywhere near human-level AI either.

James is right that it seems that only a small fraction of problems can
exploit the kind of parallelism found on the net, but Eliezer is right
that we don't know whether it would be possible to design a cognitive
architecture which could run efficiently there.

There does seem to be a chicken-and-egg problem if the AI needs the
net's resources to become superhuman, yet cannot exploit those resources
until it is already competent enough to redesign itself to run on so
constrained a system as the net. But Eliezer points out that it does
not need superhuman competence in every arena, only in software
architecture design, and perhaps an idiot-savant super-coder AI can
make the leap.


This archive was generated by hypermail 2b30 : Mon May 28 2001 - 10:00:08 MDT