Alexander 'Sasha' Chislenko, <firstname.lastname@example.org>, writes:
> You don't prohibit things without warning. You tell people that since
> this creates problems, from now on it should be prohibited.
> If you poke me in the eye with a fork, which wasn't its intended use,
> you go to jail - and there are no problems with it.
> There is a Robot Exclusion Protocol now that is followed by most
> spiders, and could be officially legalized.
> There can be a similar protocol for automatically collecting email
> addresses from the usenet postings and Web pages.
> If you tell in your page or posting in a standard way that you don't
> want this email to be collected by any program, this desire should
> be respected, just as your desire not to list your phone number, or
> not to open your door to strangers.
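For concreteness, the Robot Exclusion Protocol mentioned above is just a plain-text file a site owner publishes at a well-known location; compliant spiders fetch it and honor its directives:

```
# robots.txt -- the existing Robot Exclusion Protocol convention
User-agent: *
Disallow: /private/
```

An analogous opt-out marker for address harvesters might be a tag embedded in a page or posting, e.g. something like `<meta name="no-email-collection" content="true">` -- that tag is invented here purely for illustration; no such standard exists.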
How can you make a rule which only applies to "automatically" collecting email addresses? Why should it be OK for me to do something with my own mind, but illegal if I amplify my mind's capabilities with a computer? If we continue to adopt rules based on this distinction, what will happen when people become able to augment their own minds?
Philosophically, it just doesn't make sense to forbid doing something automatically which is legal manually (especially if you are going to throw someone in jail for breaking this law). It is as bad as the distinction between doing things commercially versus personally.
> Mass unsolicited mail can be considered everything that is directed
> to a large group (> 500?) people that the author had no previous
> contact with and/or who directly indicated that they don't want
> any unsolicited mail and/or the message is unrelated to the topic
> of the group's discussion (e.g., "Visit my porn site" message on
> comp.ai.alife) and/or has a misleading title (e.g. "thanks for
> last night" with the text "Buy bricks from Smith & Co".)
This is still going to lead to a lot of gray areas. If a stranger posts a message to the extropians list directing them to his for-profit web site, should he go to prison? Suppose his site is selling his book about transcendental meditation. Will his fate depend on whether this is sufficiently on-topic? Do we really want a world where people can go to prison for this?
Sending people to prison for having misleading titles on their message strikes me as a perfect example of what is wrong with this whole approach. You're looking at the spam you got yesterday and today, and saying, here's a rule which would make this one illegal; here's another rule which would make that one illegal; and you keep doing this until you've crossed them all off. This is a shallow approach to the problem, looking at the most superficial symptoms instead of the real issue.
> Such standards would be easily and unambiguously formulated and
> enforced in a variety of ways. I think it will be done at some
> point; the reason for the lack of interest in filtering software is
> that it's not compatible with many clients, doesn't work with many
> services
> (Web, Usenet, guestbooks, etc.) and takes money and effort to
> buy and use.
If there were really a demand for this kind of software, the market would supply it in both free and commercial versions. Your comments would apply to every piece of software. Email software takes money and effort to buy and use. Yet there is demand for it, and people do expend effort to install it. (Actually, some services, like hotmail, do offer spam filtering as one of their attractions.)
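In fact, a client-side filter applying the criteria quoted earlier is easy to sketch. The thresholds, field names, and heuristics below are illustrative assumptions, not any standard:

```python
# Sketch of a client-side spam filter using the quoted heuristics:
# a large recipient list, or a friendly-sounding subject attached to
# a sales pitch. All thresholds here are assumptions for illustration.

def looks_like_mass_mail(message):
    """Return True if the message matches the quoted heuristics."""
    recipients = message.get("recipients", [])
    subject = message.get("subject", "")
    body = message.get("body", "")

    # Criterion: directed to a large group (> 500) of strangers.
    if len(recipients) > 500:
        return True

    # Criterion: misleading title, e.g. a personal-sounding subject
    # line on a message whose body is a sales pitch.
    friendly_subject = subject.lower().startswith("thanks")
    sales_body = "buy" in body.lower()
    if friendly_subject and sales_body:
        return True

    return False

spam = {"recipients": ["a"] * 600, "subject": "hi", "body": "..."}
print(looks_like_mass_mail(spam))  # True
```

Note how gray the middle ground is: any mechanical rule like this will flag some legitimate mail and miss spam that is phrased slightly differently, which is exactly why encoding such rules into criminal law is troubling.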
The point is, the whole approach of criminalizing this behavior is fundamentally wrong. There are many technical solutions, and once the problem becomes important enough, these will start to be used. Throwing people in jail for sending out information is not the way to go.