Brin on privacy

Lyle Burkhead (LYBRHED@delphi.com)
Sat, 21 Dec 1996 11:10:15 -0500 (EST)


In reply to my statement,

: In a sea of pseudorandom bits generated by a simple encryption
: program, a "seriously" encrypted message would stand out. Then
: they would know: here is somebody who has something to hide.
: The standard encryption isn't good enough for this guy -- so who is he?
: And then they would proceed to find out who he is.

Eugene Leitl writes,

> These arguments are not valid. Cryptography does not work this way.

What I'm trying to say is that when they recognize that someone is
using strong cryptography -- stronger than what most people
generally use -- they will use *other* methods to investigate and
find out what he's up to. Cryptography can be supplemented by
plain old detective work.

> It is quite impossible to tell random from pseudorandom,

Why is it impossible? If that's a theorem, I would like to see the proof.
In any case, the problem here is not to distinguish random from
pseudorandom. The problem is to distinguish among the many
different kinds of pseudorandomness.
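To make the point concrete (this is my illustration, not anything from the original exchange): a *weak* generator can be trivially distinguished from cryptographic-quality randomness by a simple statistical test, even though a well-designed cipher's output cannot. The linear congruential parameters below are the textbook ones and are purely illustrative; they have a power-of-two modulus, so the lowest bit of the state alternates perfectly -- a dead giveaway.

```python
# Sketch: a trivial statistical test separates a weak pseudorandom
# generator from cryptographic-quality bytes. Illustrative only.
import secrets

def lcg_bytes(seed, n, a=1103515245, c=12345, m=2**31):
    """Weak linear congruential generator; emit the low byte of each state."""
    out = bytearray()
    x = seed
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x & 0xFF)
    return bytes(out)

def low_bit_alternation(data):
    """Fraction of adjacent low bits that differ; about 0.5 for good randomness."""
    bits = [b & 1 for b in data]
    flips = sum(1 for i in range(1, len(bits)) if bits[i] != bits[i - 1])
    return flips / (len(bits) - 1)

weak = low_bit_alternation(lcg_bytes(42, 10000))      # exactly 1.0: low bit alternates
strong = low_bit_alternation(secrets.token_bytes(10000))  # near 0.5
print(weak, strong)
```

The asymmetry is the crux of the debate: tests like this demolish bad generators cheaply, but against a serious cipher they tell you nothing -- which is why distinguishing *among* strong encryption schemes is the hard (and expensive) part.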

> The only way to tell them apart would try a serious cryptoattack,
> which is a very costly business.

So it's costly. When the government feels seriously threatened,
it will spend astronomical amounts of money to defend itself.
Cryptography is now classified as munitions, and the NSA may emerge
as a branch of the military, with the same standing as the Army, Navy,
and Air Force, and with a comparable budget.

>: but the NSA could see the difference.
>
> But not _casually_. That's the point.

Why is that the point? There is nothing casual about this. Recognizing
the various kinds of pseudorandom sequences produced by various
encryption schemes is not easy; neither is finding submarines in the
ocean. But neither problem is impossible. Suppose the NSA has a
budget as big as the Navy's. Then a lot of things become possible.

It is true that universal encryption would make the NSA's task more
difficult. Instead of using chips that make a simple distinction between
low-entropy and high-entropy messages, they would have to
continually upgrade their chips to make more and more fine-grained
distinctions, and they would have to use other methods as well.
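The "simple distinction between low-entropy and high-entropy messages" can be sketched in a few lines (again, my illustration of the idea, not any actual surveillance system): estimate the Shannon entropy per byte of a traffic sample, and flag anything close to 8 bits/byte as possibly encrypted or compressed. The sample texts below are made up for the demonstration.

```python
# Sketch of an entropy filter: plain text scores well below 8 bits/byte,
# while encrypted data (random bytes stand in for it here) scores near 8.
import math
import secrets
from collections import Counter

def byte_entropy(data):
    """Shannon entropy of the byte distribution, in bits per byte."""
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

plaintext = b"The quick brown fox jumps over the lazy dog. " * 200
ciphertext = secrets.token_bytes(len(plaintext))  # stands in for encrypted output

print(byte_entropy(plaintext))   # well below 8 bits/byte
print(byte_entropy(ciphertext))  # close to 8 bits/byte
```

Such a filter is cheap enough to run in hardware on bulk traffic, which is exactly why universal encryption would defeat it: once *everything* scores near 8 bits/byte, the distinguishing work has to move to finer-grained, costlier methods.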

>: Let me rephrase my original statement: using more encryption than
>: other people commonly use is like painting your windows black when
>: everybody else uses curtains. It just attracts attention.
>
> everybody using cryptography equals to anybody having windows.
> It is just some of them are bulletproof. Really easy to tell, huh?

Yes, it is easy to tell.
1. Shoot a bullet at the window. If it breaks, it's not bulletproof.
2. Take pictures of all houses in the neighborhood with an
infra-red camera. Regular glass and bulletproof glass will show up
differently in the pictures.
3. Reflect a laser off all the windows in the neighborhood. Different
kinds of glass have different properties, and are identifiable.
4. Ask the contractors who build houses -- who uses bulletproof glass?
5. Ask the company that makes bulletproof glass -- who buys it?

In any case, this analogy is not valid. Everybody using cryptography
amounts to everybody *covering* their windows. It's just that some use
more opaque coverings than others. If there is a standard covering
that most people use, and somebody uses a different, more opaque
covering, that person has called attention to himself.

Lyle