hal@finney.org writes:
> of documents, ranked by priority. When a document is fetched, its
> rank is increased and that of all others is lowered. When documents'
> rankings drop below a certain level, they are deleted from the node,
This kinda reminds me of Python's reference-counting GC model. One
could also implement a (glacially slow) bitrot, where constantly
decrementing counters are incremented by accesses (ok, in your model
the clock only ticks whenever an access occurs; if one node never
sees hits, objects in it can become infinitely stale). Objects
evaporate when their counters fall below zero.
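Roughly like this toy sketch, I guess -- the class name, the decay
step and the access bonus are all made up here, just to make the
counter idea concrete:

class BitrotStore:
    """Documents carry a counter; every access ages all documents a
    little and rewards the one fetched. Documents whose counter falls
    below zero evaporate. The clock only ticks on accesses, so a node
    that never sees hits never ages (or drops) anything."""

    def __init__(self, decay=1, bonus=10, initial=50):
        self.decay = decay      # what every document loses per access
        self.bonus = bonus      # what the fetched document gains
        self.initial = initial  # counter a freshly stored document starts with
        self.docs = {}          # key -> [counter, data]

    def store(self, key, data):
        self.docs[key] = [self.initial, data]

    def fetch(self, key):
        # Age everything on each access ("rank of all others is lowered").
        for entry in self.docs.values():
            entry[0] -= self.decay
        hit = self.docs.get(key)
        if hit is not None:
            hit[0] += self.bonus
        # Evaporate anything whose counter dropped below zero.
        self.docs = {k: v for k, v in self.docs.items() if v[0] >= 0}
        return hit[1] if hit is not None else None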
It would also be nice to know when the last instance of an object on
the network is about to face the great bitbucket in the sky. Maybe
some kind soul will grant it immortality in a bulk storage
archive. (Otoh, it will probably be called the Great Archive of Crap,
not even attractive to practising kibologists).
> This produces a sort of crude "economy", where document requests are
> treated like payment to keep documents present in the system. Documents
> which are requested more often get higher rankings and get replicated on
> multiple servers. Documents which are requested too rarely eventually
> get dropped from the system.
If creating content has a cost, one could just keep all documents
forever. Available storage is expanding exponentially.
> It's a highly unbalanced and fragile system which is unlikely to achieve
> much robustness.
>
> However it could still be a good testbed for the basic mechanisms for
> storing documents, moving them around, and so on. If the ranking system
> were updated to be more reasonable (based on real payments, for example)
> then it could grow into a more stable system.
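Payment-backed ranking could look roughly like the following -- the
currency units, the deposit, and the drop threshold are all
placeholders, just a guess at what "based on real payments" might
mean in code:

DROP_THRESHOLD = 0

class PaidStore:
    """Requests carry a payment; the payment, not the bare hit count,
    is what props a document up. Documents whose accumulated payment
    runs out get dropped."""

    def __init__(self):
        self.rank = {}   # key -> accumulated payment
        self.data = {}   # key -> document bytes

    def store(self, key, blob, deposit):
        self.rank[key] = deposit
        self.data[key] = blob

    def fetch(self, key, payment):
        if key not in self.data:
            return None
        # Credit the requested document, debit everything else pro rata.
        self.rank[key] += payment
        others = [k for k in self.rank if k != key]
        if others:
            share = payment / len(others)
            for k in others:
                self.rank[k] -= share
        # Drop documents whose balance has run out.
        for k in [k for k, r in self.rank.items() if r < DROP_THRESHOLD]:
            del self.rank[k], self.data[k]
        return self.data.get(key)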
It is really unfortunate that all the usable digicash algorithms have
a lock on them in legalspace. Someone oughta rescue all these poor
bits and bring them into circulation.