internet++, predecessor of >web (was Re: Disasterbation)

Eugene Leitl (Eugene.Leitl@lrz.uni-muenchen.de)
Thu, 19 Dec 1996 15:06:25 +0100 (MET)


On Wed, 18 Dec 1996 DanHook80@aol.com wrote:

> In a message dated 96-12-17 05:53:41 EST, you write:
>
> << The idea of the Internet breaking down, due to an overload of messages,
> was brought up here. I began wondering: if the amount of information the
> Internet can handle at any given time is limited, then won't the price of
> sending a given amount of information through the Internet go up to match
> the demand for the limited resources? I don't see how an overload could
> happen if, as the overload threshold was being approached, the price to
> send messages kept rising, thus cutting down on the length of messages
> people send, since it will cost more to send a longer message. Is my
> theory correct? >>
>
> It would seem the same thing should work for electrical power, but it does
> not. There are still brownouts due to overuse of power. The price cannot
> change fast enough.
>
> To apply this specifically to the e-mail situation you have to think about
> how much it costs you to send every byte of e-mail. If you're like me, then
> you are paying a set price per month. I could have my computer spewing out
> snowballs left and right 24 hours a day and it would not cost me a penny
> more than my monthly fee. Now, if thousands, or a million, people did that,
> the net might crash, although I doubt it.

We need agoric computing, with ubiquitous WSI nodes dwelling behind the
walls, linked by cheap thermoplastic fiber optics (attenuation much higher
than glass, but sufficient for short-range links), with orthogonal
hypergrid connectivity (redundant links, wiring density decreasing with
distance; though fuzzy wiring would do, binary offsets into ID space are
best: they catch runtime defects, perform well even in highly defective
grids, derive node IDs from the wiring constraints, and need only purely
local-knowledge routing) and lightweight routing (grassrouting). The
infrastructure is installed/owned by you, so you charge for the
routing/computational resources (the nanoOS must have nanocash capability)
used by transient packets/agents (all objects).
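
As a purely illustrative sketch (node IDs, the live-link set, and all names
below are my assumptions, not anything specified above), routing by binary
offsets with only local knowledge can be as simple as flipping one
differing ID bit per hop and skipping links that turn out to be dead:

# Hypothetical sketch: greedy routing in a hypergrid where neighbours
# differ by one bit (binary offsets into ID space). A node needs only
# local knowledge: compare its own ID with the destination ID and
# forward along any live link that clears one differing bit.

def next_hop(node_id, dest_id, live_links):
    """Pick a neighbour one bit closer to dest_id, skipping dead links.

    live_links: the set of neighbour IDs actually reachable (defective
    links are simply absent), so routing degrades gracefully in a
    partially broken grid.
    """
    diff = node_id ^ dest_id
    bit = 0
    while diff:
        if diff & 1:
            candidate = node_id ^ (1 << bit)   # flip one differing bit
            if candidate in live_links:
                return candidate
        diff >>= 1
        bit += 1
    return None  # at the destination already, or a detour is needed

# Node 0b0110 routing toward 0b1010; the link to 0b0100 is defective.
print(next_hop(0b0110, 0b1010, {0b0111, 0b1110, 0b0010}))  # -> 2 (0b0010)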

You are both a provider and a consumer, striving to achieve a balanced
account. If you charge too much (infobahn waylayer?), packets will route
around you, so there is an economic pressure for the hardware to become
cheaper and for prices to go down.
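
To make that pressure concrete, here is a toy model (the per-packet prices
and node names are invented for illustration only): a packet's owner picks
the cheapest advertised transit, so a node that overcharges is routed
around and earns no nanocash at all.

# Illustrative toy model: two candidate transit nodes between A and D.
# An "infobahn waylayer" that asks too much simply gets no traffic.

price = {"B": 2, "C": 9}      # transit fee each node asks, per packet
income = {"B": 0, "C": 0}

for _ in range(100):          # 100 packets from A to D, via B or C
    transit = min(price, key=price.get)   # cheapest advertised transit
    income[transit] += price[transit]

print(income)   # {'B': 200, 'C': 0}: C has priced itself out of the market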

If you own a number of meshed nodes, there must be a basic asymmetry
(hardware-supported) catching attempts at exoperversion: a movable
"firewall", i.e. the distinction between nodes owned by you and nodes
transiently owned by others.
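
In software terms such an asymmetry might look like the following sketch
(entirely my assumption; the mechanism itself is left open above):
transient guests can buy plain cycles, but anything privileged, like
moving the firewall, requires a token matching the node's owner.

# Illustrative sketch only: a node distinguishes its owner from transient
# guests and refuses privileged requests from the latter.

class Node:
    def __init__(self, owner):
        self.owner = owner
        self.inside_firewall = True      # currently counted as "yours"

    def request(self, token, op):
        if op == "compute":              # guests may buy plain cycles
            return "granted"
        if token != self.owner:          # privileged ops need ownership
            return "denied"              # attempted exoperversion caught
        if op == "move_firewall":        # reclassify the node
            self.inside_firewall = not self.inside_firewall
            return "granted"
        return "denied"

n = Node(owner="gene")
print(n.request("stranger", "compute"))         # granted
print(n.request("stranger", "move_firewall"))   # denied
print(n.request("gene", "move_firewall"))       # granted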

If you want to reclaim the resources (say, you'd like to watch webTV and
need additional horsepower for video decrunching), you send all transient
objects a flush message, then reboot the nodes from scratch and attach
them to those currently used by you, the load leveler then taking care of
the rest.
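
Reduced to stubs, that reclamation sequence could read like this (class and
method names are made up for illustration; none of this is an actual
nanoOS interface):

# Illustrative stubs only: reclaim nodes by flushing transient guest
# objects, rebooting, and handing the fresh capacity to a load leveler.

class WallNode:
    def __init__(self, name):
        self.name = name
        self.guests = ["agent-1", "agent-2"]   # transient objects

    def flush_and_reboot(self):
        self.guests.clear()        # flush message sent, node rebooted clean

class Cluster:
    def __init__(self):
        self.nodes = []

    def attach(self, node):
        self.nodes.append(node)    # node now serves its owner again

    def rebalance(self):           # stand-in for the load leveler
        print("leveling load over", [n.name for n in self.nodes])

def reclaim(nodes, cluster):
    for node in nodes:
        node.flush_and_reboot()
        cluster.attach(node)
    cluster.rebalance()

reclaim([WallNode("wall-7"), WallNode("wall-8")], Cluster())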

&c&c.

(I hope the above minirant made sense.)

ciao,
'gene

P.S. The BeBox now runs Linux (SMP?). A nanokernel with a Linux
personality (not Mach) has been developed in Dresden.

> Dan Hook