From: Hal Finney (hal@finney.org)
Date: Wed Apr 09 2003 - 13:24:29 MDT
I think that much of the information floating around about Trusted
Computing technology is incorrect or misleading. I first learned about
this stuff last year when some of the early alarmist reports came out.
I studied the TCPA spec, bought a book on TCPA, and I've learned what
I could about Microsoft's Palladium, although there has been less
information about that.
Let me explain what Trusted Computing (TC) is and basically how it works.
The idea is to set up a computer so that a third party not sitting in
front of it can trust how it will behave. This does not mean that the
third party owns the computer or has complete control over it. It means
that he can become convinced that the computer is going to behave in a
predictable way. That means in practice that he can be convinced about
what program it is going to run, and that the program has a certain
immunity to having its code or data inspected or changed.
Let's talk about uploads for a moment before I go into detail about TC.
Suppose you were an upload, and were about to transfer your program to
a remote system. You'd be putting yourself at the mercy of the owner
of that system, wouldn't you? Suppose he turned out to be malicious,
and wanted to inflict pain, or turn you into a slave. How could you
protect yourself against that?
TC is how. You'd need to gain assurance that the remote computer was
running a standard, well-understood and well-behaved program that would
receive your data and run it unmolested. You would need to *trust* the
remote computer to behave in a *specified manner*. That is precisely
the mission statement for Trusted Computing!
Now I'll say more about how TC works. As I said, the goal is to be able
to assure a remote system that a particular program is running and that
the program can protect itself and its data.
The way this is achieved is via some secure hardware associated with the
computer, which has a few functions. One is that the hardware can take a
"fingerprint" (a cryptographic hash) of software which will use the secure
functionality. This fingerprint can be used in a couple of ways. First,
it can be reported via a cryptographic signature to remote machines.
This is what allows the remote machines to be convinced about what program
is running - because they learn the digital fingerprint of the software.
And second, this fingerprint can be used to "lock" (encrypt) data, such
that the data can only be decrypted by the same program running with
the same fingerprint. No other program, and no altered version of this
program, can decrypt the data.
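To make the fingerprint idea concrete, here is a minimal sketch in
Python. It is purely illustrative: the real measurement happens inside
the secure hardware, and the particular hash algorithm (SHA-256 below)
is my assumption for the sketch, not necessarily what the spec mandates.

    # Illustrative only: a real TPM measures the program image in hardware.
    import hashlib

    def fingerprint(program_image: bytes) -> str:
        """Cryptographic hash identifying this exact program image."""
        return hashlib.sha256(program_image).hexdigest()

    original = b"...bytes of the program's code..."
    altered  = original + b"x"       # even a one-byte change

    print(fingerprint(original))     # a fixed value for this exact code
    print(fingerprint(altered))      # a completely different value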
Two points follow from this, which I will explain further in a moment.
The first is that this suffices to achieve many of the goals of TC,
including protection of sensitive content, digital rights management, and
greater immunity to viruses. And the second is that it is *not* necessary
for Microsoft or anyone else to approve or limit the programs which can
run in this trusted mode. Microsoft cannot shut down programs or delete
files off of trusted computers. There is no need for them to have this
ability in order to achieve the goals of trusted computing.
The way in which this functionality achieves the goals of TC is by
letting third parties know what program is running. Keep in mind that
this doesn't mean they can snoop on your system arbitrarily; rather,
your programs can prove to remote servers what is running, by asking
the secure hardware to send a cryptographically signed message that
describes the fingerprint of the running program. This allows, for
example, a content server to only download content to programs that it
trusts to handle the content reliably. It allows a government security
agency to similarly require that, say, email is only exchanged between
secure email programs, because each one can check what program is running
at the other side and refuse to connect to it.
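To illustrate what the content server's side of that check might look
like, here is a rough sketch. The names are made up, and the HMAC is
only a stand-in for the public-key signature that the secure hardware
would actually produce.

    import hashlib, hmac

    # Stand-in for the attestation key; real hardware would use a
    # public/private key pair, not a shared secret like this.
    HARDWARE_KEY = b"hypothetical shared attestation key"

    # Fingerprints of programs this particular server chooses to trust.
    TRUSTED_FINGERPRINTS = {
        hashlib.sha256(b"the approved player program").hexdigest(),
    }

    def verify_report(fingerprint, signature):
        """Check that the fingerprint report came from the secure hardware."""
        expected = hmac.new(HARDWARE_KEY, fingerprint.encode(),
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, signature)

    def serve_content(fingerprint, signature):
        if not verify_report(fingerprint, signature):
            return "reject: report not signed by the secure hardware"
        if fingerprint not in TRUSTED_FINGERPRINTS:
            return "reject: program not on this server's trusted list"
        return "ok: send the content"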
Another application might be an online game, where the server can make
sure that each user is running a "legal" game client and not one that cheats.
Auction services could use similar technology. Likewise for SETI@Home
and similar semi-competitive distributed computing efforts, which have
been plagued by cheaters. This could also be a foundation for more
commercial forms of distributed computing, where sensitive calculations
could be farmed out to end-user computers, and the distributors would
have greater assurance that users couldn't get access to the data that
they were being paid to process, or falsely claim that they were owed
for work they hadn't done.
As for viruses, the idea is that once a program has locked some data,
a virus that comes along and infects another program won't be able to
get access to that data. Even if it infects the very program that locked
the data, the infection will change that program's fingerprint, and so
even it will no longer be able to get at the data. This won't stop
viruses, but it could limit the damage they can cause.
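Here is a rough sketch of that "locking" (sealing) idea. The key
derivation and the XOR "cipher" are stand-ins chosen only to show how
the data ends up bound to a particular fingerprint; real hardware would
keep the secret internal and use proper encryption.

    import hashlib

    DEVICE_SECRET = b"hypothetical secret that never leaves the hardware"

    def derive_key(fingerprint):
        # The key depends on both the hardware secret and the
        # fingerprint of the program asking for it.
        return hashlib.sha256(DEVICE_SECRET + fingerprint.encode()).digest()

    def seal(data, fingerprint):
        key = derive_key(fingerprint)
        # Toy XOR "cipher", not real encryption; XOR is its own inverse.
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    unseal = seal   # same XOR with the same key recovers the data

    fp_clean    = hashlib.sha256(b"the original program").hexdigest()
    fp_infected = hashlib.sha256(b"the program, plus a virus").hexdigest()

    blob = seal(b"my saved secrets", fp_clean)
    print(unseal(blob, fp_clean))       # b'my saved secrets'
    print(unseal(blob, fp_infected))    # unreadable bytes: wrong key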
As for the second point above, the claim that only Microsoft-approved
code will run: note that all that is necessary is for the TC hardware
to be able to take a hash of the program and to report it. Programs that use
this functionality do not have to be signed by Microsoft or anyone else.
Even though we may call these "trusted" programs, they don't have any
special powers. The sense in which they are "trusted" is that they can
get their hashes reported elsewhere, so that those people can decide
whether to trust them.
There is no central party who decides which programs are trusted. Rather,
each application area, even each individual user, would ultimately decide
whether to trust a program for a particular purpose. And these judgements
are made with respect to programs running on *other people's* computers.
I would get to decide whether I want to trust a program running on your
computer for some purpose, and vice versa. Sony would get to decide
whether to trust a program running on your computer for downloading
its music catalog. Maxis would get to decide whether to trust the Sims
software running on your computer for connecting to its game server.
Each computer makes its own decision about who to trust. It's not
Microsoft, for indeed, the potential scope of this technology is so
large that it would hardly be possible for Microsoft to decide for each
program whether or not it was in some sense "trustworthy". They want this
to be used in a big way. Having to do a code review for each program
would increase their costs enormously, would open them up to liability,
and is completely unnecessary in order to achieve the properties of
Trusted Computing.
Based on my understanding of the technology, there is no need to fear
that uploads running on a Palladium system would imply that Microsoft
"owned" all the people and could kill them at any time. The ability
to take a secure fingerprint of a program does not imply the ability to
kill the program! TC as I understand it is a technology for information
protection, not for destruction. It allows applications to extend trust
to remote systems, and to keep their data immune from molestation by
other programs.
Obviously the story I have told here is very much at odds with what
you will have heard about TCPA/Palladium/NGSCB elsewhere on the net.
I can't really account for that discrepancy. I don't understand why my
reading of the technology's properties and capabilities is so different
from everyone else's. It's possible that there are non-public documents
which paint a much more sinister picture. All I can say is that based
on the public information, TC works as I have described it here.
Hal