Re: Free riding on Gnutella white paper

From: Eugene.Leitl@lrz.uni-muenchen.de
Date: Tue Nov 28 2000 - 13:17:01 MST


hal@finney.org writes:

> Gnutella's woes demonstrate that making a scalable, usable peer to
> peer file sharing system is not as easy as it looks. In retrospect it

What Gnutella's woes demonstrate clearly is that if you take a broken
protocol written by an idiot (Napster) and try to make it
peer-to-peer/headless without any modifications, it will break horribly
all over the place. Duh. Napster/Gnutella is really, really broken. It
totally ignores prior art addressing these obvious problems, and a
number of projects (I'm really looking forward to running MojoNation)
have addressed them, and do it much better.

Otoh, Napster created the user nucleus with usable criticality, and
thus amplified the meme all over the place. Subsequent efforts don't
have to explain themselves, just tap into existing infrastructure with
a (preferably, one-way) gateway, and declare "we're the better
Napster". As soon as the new service reaches criticality as enough
users install the clients, you break the (vulnerable, since traceable)
connecting gateways, and go fully autonomous.

> is amazing that Napster was able to do so well. Of course the other

IIRC, Gnutella's problem is that it uses the Napster protocol without
caching indexes (Napster runs over a central server, so it doesn't
have to deal with the query amplification problem). Each query gets
amplified to essentially the entire network. A major fraction of
people 1) are freeloader scum, 2) run 56k modems, 3) don't use
lightweight query coding, so packets are large. So each query gets
amplified a lot, and eventually clogs the bandwidth of your modem
at a relatively modest number of nodes. Finis: the network fragments
into several subnetworks -- each of them still operational, however.
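To make the amplification concrete, here's a toy flooding model (my own
back-of-the-envelope sketch, not the actual Gnutella wire protocol; node
count, degree, and TTL values are made up): every node forwards each new
query to all its neighbours until the TTL runs out, and every link
traversal costs one packet -- even duplicates that get dropped.

```python
# Toy model of Gnutella-style query flooding (illustrative, not the
# real protocol): count packets generated by a single query.

import random

def build_network(n_nodes, degree, seed=0):
    """Random graph: each node links to at least `degree` peers (symmetric)."""
    rng = random.Random(seed)
    neighbours = {i: set() for i in range(n_nodes)}
    for i in range(n_nodes):
        while len(neighbours[i]) < degree:
            j = rng.randrange(n_nodes)
            if j != i:
                neighbours[i].add(j)
                neighbours[j].add(i)
    return neighbours

def flood(neighbours, origin, ttl):
    """Count packets sent when `origin` floods a query with the given TTL."""
    seen = {origin}
    frontier = [origin]
    packets = 0
    for _ in range(ttl):
        next_frontier = []
        for node in frontier:
            for peer in neighbours[node]:
                packets += 1          # one packet per link traversal
                if peer not in seen:  # duplicates are dropped, but still cost bandwidth
                    seen.add(peer)
                    next_frontier.append(peer)
        frontier = next_frontier
    return packets, len(seen)

net = build_network(1000, 4)
for ttl in (2, 4, 7):
    packets, reached = flood(net, 0, ttl)
    print(f"TTL={ttl}: {packets} packets, {reached} hosts reached")
```

Multiply the packet count by everyone on the network issuing queries,
and a 56k uplink is gone long before the host count gets interesting.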

An SDSL line gives you an order of magnitude more bandwidth (and
much lower latency) than a 56k modem, and hence would not saturate
so easily. A rewrite where each node caches the common index of the
next few neighbour (few-hop) nodes (resulting in a bit of initial setup
time/an initial traffic spike before your peer will accept your queries)
would make the whole saturation problem disappear into thin air.
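A sketch of how such a caching rewrite could look (my own illustration
under my own assumptions -- the `Node` class, `build_index`, and the
two-hop default are all hypothetical, not any existing design): at join
time a node crawls its few-hop neighbourhood once and merges the peers'
file lists into a local index, so later queries generate no traffic at
all.

```python
# Sketch of the few-hop index-caching idea (hypothetical illustration):
# pay a one-time setup cost at join time, answer queries locally after.

from collections import deque

class Node:
    def __init__(self, name, files):
        self.name = name
        self.files = set(files)
        self.neighbours = []
        self.index = {}          # filename -> set of node names holding it

    def connect(self, other):
        self.neighbours.append(other)
        other.neighbours.append(self)

    def build_index(self, hops=2):
        """One-time BFS crawl of the few-hop neighbourhood (the setup spike)."""
        self.index = {}
        seen = {self}
        queue = deque([(self, 0)])
        while queue:
            node, dist = queue.popleft()
            for f in node.files:
                self.index.setdefault(f, set()).add(node.name)
            if dist < hops:
                for peer in node.neighbours:
                    if peer not in seen:
                        seen.add(peer)
                        queue.append((peer, dist + 1))

    def query(self, filename):
        """Answered from the local cache -- no packets leave this node."""
        return self.index.get(filename, set())

a, b, c = Node("a", ["song.mp3"]), Node("b", []), Node("c", ["paper.ps"])
a.connect(b); b.connect(c)
b.build_index(hops=2)
print(b.query("song.mp3"))   # {'a'}
print(b.query("paper.ps"))   # {'c'}
```

The setup crawl is exactly the "initial travel spike" mentioned above;
after it, the per-query cost drops from O(network) packets to zero.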

These simple measures (the advent of xDSL and a protocol patch) do not
address the other pressing problems, however. Did I already mention
that Napster/Gnutella is b0rken?

> There are a dozen or more other P2P systems out there that I don't know
> anything about. This fragmentation is itself a problem because these
> systems have to reach critical mass to be useful.

Thanks for the mini overview. I think we can agree that the problems
currently plaguing P2P sharing are not fundamental, and are IMO already
adequately addressed in several existing and forthcoming packages,
albeit probably not all in a single one.

The demand is certainly there, and since steganography and remixing
will make the resulting network both unfilterable and untraceable, we
can probably also agree that you can't stop the emergence of global
P2P at either the technical or the legal level. The only way to stop it
would be to rip out the existing infrastructure and substitute it with
a lawnik-mandated one, while also inserting multiple probes into users'
orifices. I think it is easy to see that this won't happen.

As soon as the beta code bugs are ironed out, any digitized or
digitizable IP for which there is demand from even a negligible
fraction of all participants will be on the network. I'm not
worried about music or texts (street performer protocol & Co), I'm
worried about movies, which are really expensive to make. (But I never
liked Hollywood anyway. They may all burn in hell for all I care.)

I think the net result will be a win. But we'll find out soon enough,
anyway.



This archive was generated by hypermail 2b30 : Mon May 28 2001 - 09:50:32 MDT