Managing Mail

Eugene Leitl (Eugene.Leitl@lrz.uni-muenchen.de)
Sat, 17 May 1997 18:22:54 +0200 (MET DST)


or, Keeping Abreast of the ongoing Information Tsunami

I am currently confronted with a common problem,
namely: how to manage one's mail. As similar
trouble will soon probably plague other users, an
ad hoc analysis seems to be in order. Feel free to
dissect this to a bloody rag.

First, I am overwhelmed by sheer traffic volume.
The chronically infoglut-infected (EPrime=me) tend to
homeostate their load up to the tolerance level
(and sometimes beyond), but personal
infoprocessing resources _do_ fluctuate over
time. Should I suddenly become unable to access
my mail for several days (as happened recently
when my (notoriously unreliable, since academic)
ISP was down), processing the backlog can be
demanding, and scanning a month's worth of mail is
beyond human (well, at least well beyond my current)
mental (fuzzy-brained) and bodily (chronically
bleary-eyed, carpal tunnel syndrome/RSI-ridden
;) capabilities.

When in transit, accessing one's mail via
a cellular link and a laptop also seems a good
idea.

The most straightforward solution would be the
ability to cut off the bulk of the information stream,
while retaining the most vital data, with a single
invocation. The simplest implementation would be to use
(at least) two email addresses: one for important,
low-traffic personal mail, and one for publicly
visible bulk traffic. Furthermore, a means to
(un)subscribe to all of one's mailing lists in one
step (or the reply-less equivalent of the
*NIX vacation program) seems useful.
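The two-address split above amounts to routing each incoming message by the address it was sent to. A minimal sketch (in Python, purely illustrative; the addresses and folder names are made-up examples, not anything from this post):

```python
# Route incoming mail by recipient address: personal vs. bulk.
# Both addresses below are hypothetical placeholders.

PERSONAL_ADDRESS = "gene-private@example.org"  # low-traffic, hand-picked correspondents
BULK_ADDRESS = "gene-lists@example.org"        # publicly visible, lists and bulk traffic

def route(headers):
    """Pick a mailbox from a dict of header fields (case-insensitive match)."""
    to_field = headers.get("to", "").lower()
    if PERSONAL_ADDRESS in to_field:
        return "inbox-personal"
    if BULK_ADDRESS in to_field:
        return "inbox-bulk"
    # Neither address matched: likely spam or a Bcc -- park it for later.
    return "inbox-unsorted"
```

Anything landing in the bulk folder can then be skipped wholesale after a few days offline, while the personal folder stays small enough to read in one sitting.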

(At least) 95% of everything is trash. So is
incoming mail. About 1-5% of everything I retain
for possible future reference. The simplest way to
reduce the fraction of trash (so I can subscribe
to even more mailing lists to max out on traffic)
is to block out certain posters (particularly certain
kooks, who generate a wealth of redundant
posts) -- the classical killfile. I guess it is
not too difficult to implement a personalized
killfile filter via a perl script, which I intend
to do pretty soon, once the Linux box is done (btw
Linux: there is much ado hereabouts, as the (press)
mainstream has suddenly become aware of it). This
is also a first, albeit weak, protection dam against
unsolicited mail, aka spam, since one could mark
certain domains as killfile targets with a single
(space-cadet) keystroke. I am not sure forged
mail headers are a good means of dealing with bot
scans, though I might be forced to accept this
easy solution, as the absolute spam load has become
pretty intolerable on some days. I still think
that, apart from banishing mailto: tags from one's
web site (never got around to doing this), this must
be handled by mailing list majordomo software,
which should replace users' addresses in
web-accessible archives with handles visible only
to local mailing list users, and which additionally
limits access to mailing lists by trivial (mail-back
ack with a one-time random token) or sophisticated
(grant of full membership by majority decision
of the inner circle, after a thorough scan of
the applicant's mail history) mechanisms.
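The killfile idea above -- drop anything from listed posters or whole domains -- is simple enough to sketch. The post proposes a perl script; here is the same logic in Python for illustration, with made-up killfile entries:

```python
# Minimal killfile filter.  Entries are either full addresses or bare
# domains; a match means the message gets dropped unread.
# The entries below are hypothetical examples.

KILLFILE = {"kook@example.com", "spamhaus.example"}

def killed(sender):
    """True if the sender's address, or its domain, is in the killfile."""
    sender = sender.lower().strip()
    if sender in KILLFILE:
        return True
    # Fall back to a domain-wide match -- the "single keystroke" case
    # of killfiling an entire spam-emitting domain.
    domain = sender.rsplit("@", 1)[-1]
    return domain in KILLFILE
```

A real version would read the killfile from disk and hook into the delivery agent, but the core test stays this small.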

Another thing plaguing me is the inability to access
important, stored mail. Years' worth of compressed,
'gene-filtered traffic lies around on several machines,
virtually inaccessible. A mail database and a full-text
index are a must, I guess. As I am determined to use
(x)emacs as a generic tool, and it can browse HTML,
I think using mSQL and w3-msql might not be a bad
choice. (I'd be thankful for pointers if anybody is
aware of a better PD dbase with a web interface.)
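Whatever the backing store ends up being, the full-text index itself is just an inverted mapping from words to messages. A toy sketch in Python (a stand-in for the mSQL setup considered above; a real archive would want stemming and a persistent database):

```python
# Toy full-text index over stored mail: word -> set of message ids.
import re
from collections import defaultdict

def build_index(messages):
    """messages: dict of message id -> body text."""
    index = defaultdict(set)
    for msg_id, body in messages.items():
        for word in re.findall(r"[a-z0-9']+", body.lower()):
            index[word].add(msg_id)
    return index

def search(index, *words):
    """Ids of messages containing every given word (AND query)."""
    sets = [index.get(w.lower(), set()) for w in words]
    return set.intersection(*sets) if sets else set()
```

Queries then reduce to set intersections, which stay fast even over years of traffic.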

Lacking truly smart filters (whatever happened to
Cyc?), we must still rely on human agents. I am
sure informal user groups forwarding user-prescanned
information are pretty common; what we need is to
formalize this approach. We should appoint a panel
of high-selectivity web/usenet/mailing list
scanners, who feed their results into dedicated
channels. Thanks, Damien (cum shillelagh), for the
idea.

I volunteer as a scanner. Anybody else?

ciao,
'gene