On 5/27/01 2:20 PM, "Eliezer S. Yudkowsky" <sentience@pobox.com> wrote:
>
> For a bandwidth-slow system, you split off a lot of independent packages
> and rendezvous with them a second, minute, or hour later. You don't get
> to increase the size of your largest high-speed thought, but you can have
> large but very slow thoughts, or a huge number of small and fast
> thoughts. Distributing a human brain over the Internet would slow it
> down. A seed AI distributing verself over the Internet would need to
> change the character of vis cognition, but after that could take full
> advantage of computing power.
For many types of code and algorithms, there is no transformation that will
make them usable on a true loosely distributed system like the Internet.
For "usable" I refer you to "scalability, integrity, stability -- pick any
two". (There actually are relevant theorems and proofs for this, but this
version makes for much shorter reading.) Notice that some types of
applications and algorithms are conspicuously absent on distributed systems
even though they could certainly use more performance and a huge market for
them already exists. This is not an accident; for some applications,
integrity and stability are not disposable.
Assuming that integrity and stability are essential for building an AI, we
are essentially painted into a corner with respect to how well an AI will
scale on a piece of hardware such as the Internet. The fact that the
Internet is composed of slow, high-latency links only aggravates the
situation, effectively consuming the relatively small bit of scalability
allowed while giving almost no payoff to the AI.
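The effect can be sketched with a toy model (all numbers below are
hypothetical, chosen only to illustrate the argument): if an algorithm needs
R synchronization rounds and each round costs one network round trip, the
achievable speedup collapses once latency dominates.

```python
# Toy model with hypothetical numbers: speedup of a communication-heavy
# algorithm when each of R synchronization rounds costs one round trip.

def speedup(n_nodes, serial_time, rounds, latency):
    """Speedup over a single machine: compute shrinks with n_nodes,
    but the latency term does not parallelize at all."""
    parallel_time = serial_time / n_nodes + rounds * latency
    return serial_time / parallel_time

T = 3600.0     # one hour of serial work (assumed)
R = 100_000    # synchronization rounds (assumed)

lan = 0.0001   # ~0.1 ms cluster interconnect round trip
wan = 0.1      # ~100 ms Internet round trip

print(speedup(100, T, R, lan))  # cluster: latency term is small
print(speedup(100, T, R, wan))  # Internet: below 1.0, i.e. slower
                                # than a single machine
```

With these (made-up) figures, 100 Internet nodes come out slower than one
local machine, while the same 100 nodes on a cluster interconnect still
show a healthy speedup.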
But yes, in theory you could have a slowly -- *VERY* slowly -- growing AI
that operated over the Internet. However, I would expect it to be
surpassed, probably unnoticed, by an AI running on systems that have been
better engineered for the task. Note that the differences in scalability
between these types of implementations are measured in orders of
magnitude. A task that might take a basement cluster AI a day on modern
hardware may take years for one distributed over the Internet.
> However, you can say nothing at all about how much of an inefficiency this
> represents at the *cognitive* level.
I'm not sure what this means. The inefficiency of using the Internet as a
distributed system could very well mean that a seed AI never reaches
critical mass intelligence-wise. This represents a real hard limit, for now
at least. Basically, you have to look at the problem as determining the
maximum throughput of a given algorithm (AI in this case) on a given piece
of hardware (a large, loosely distributed system in this case). You can
treat a distributed system exactly like a single piece of hardware because
mathematically there is no difference. Plug in the numbers and turn the
crank. Everything I've seen appears to suggest that, even if you did send
an AI out over the Internet, you could figuratively kick its ass with a
well-designed basement cluster by any useful metric.
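"Plug in the numbers and turn the crank" can itself be sketched in a few
lines. The figures below are illustrative assumptions, not measurements:
the parallel efficiency of a tightly coupled algorithm on Internet links is
assumed to be orders of magnitude worse than on a cluster interconnect.

```python
# Back-of-envelope comparison (all figures are assumptions): effective
# throughput of the same algorithm on a small cluster vs. a much larger
# Internet-distributed system whose links kill parallel efficiency.

def effective_throughput(nodes, flops_per_node, efficiency):
    """Sustained useful FLOP/s after communication overhead."""
    return nodes * flops_per_node * efficiency

# Assumed: 32-node basement cluster, 50% parallel efficiency.
cluster = effective_throughput(nodes=32, flops_per_node=1e9,
                               efficiency=0.5)

# Assumed: 100,000 Internet nodes, but a latency-bound algorithm
# leaves only a tiny fraction of each node doing useful work.
internet = effective_throughput(nodes=100_000, flops_per_node=1e9,
                                efficiency=1e-6)

print(f"cluster:  {cluster:.2e} FLOP/s")
print(f"internet: {internet:.2e} FLOP/s")
print(f"ratio:    {cluster / internet:.0f}x")
```

Under these assumed numbers the cluster wins by a couple of orders of
magnitude despite having a tiny fraction of the raw hardware, which is the
shape of the day-versus-years gap described above.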
This is a useful discussion in any case. I don't recall ever seeing it come
up on the extropians list since I've been on it.
-James Rogers
jamesr@best.com
This archive was generated by hypermail 2b30 : Mon May 28 2001 - 10:00:08 MDT