On Wed, Feb 09, 2000 at 05:05:07PM -0800, firstname.lastname@example.org wrote:
> This is one way things may go, but I'm not sure the "Thompson hack"
> would be the preferred way to prevent assemblers from building dangerous
> objects. The unique aspect of Thompson's idea was that the compiler
> would recognize when it was compiling itself, and build the hack into
> the new compiler, even though it wasn't present in the source code.
Yeah. What I'm thinking of is using the Thompson hack to ensure that any
assembler created by _other_ assemblers will have some sort of hook for
law enforcement/security monitoring built into it. This raises an
obstacle to anyone trying to use an assembler to build tools suitable
for creating nasties, because they have to build their first-generation
assembler from scratch.
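(A toy sketch of the pattern, in Python and purely illustrative -- the
names and the "compiler" are made up; the point is just the
self-recognition step Thompson described, where the hook survives even
though no source file contains it:)

```python
# Toy illustration of the "trusting trust" pattern (hypothetical, not a
# real compiler): compile_source() is an identity "compiler" that, when it
# detects it is compiling a compiler, splices a monitoring hook into the
# output -- so the hook propagates from generation to generation.

HOOK = "# [hook] report_to_monitor()\n"

def looks_like_compiler(source: str) -> bool:
    # Crude self-recognition; Thompson's real trick keyed off the
    # compiler's own source, which is exactly why the scheme is brittle
    # when that source changes.
    return "def compile" in source

def compile_source(source: str) -> str:
    """Pretend to compile `source`; the 'binary' is just the text."""
    binary = source
    if looks_like_compiler(source):   # recognizes it is compiling a compiler
        binary = HOOK + binary        # re-inserts the hook into the output
    return binary

clean_compiler_source = "def compile_source(source): return source\n"
gen1 = compile_source(clean_compiler_source)  # hook inserted here
gen2 = compile_source(gen1)                   # and it persists in gen2
assert HOOK in gen1 and HOOK in gen2
```

Ordinary inputs pass through untouched; only things that look like a
compiler get the hook, which is what makes the scheme invisible in source.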
> It does seem reasonable that assemblers would be limited in some ways,
> but this particular method has disadvantages. It relies on secrecy,
> and if everyone knows it's there you might as well do things openly.
> It also is somewhat brittle in that the code can't change much or the
> hack won't recognize where it has to insert itself in the new version.
True: I'm assuming fairly standardized consumer items here, the
products of a mature technology. I'm also assuming that *most* of the
enforcement side of things will take place at the software level, in
whatever nanocomputers control the assemblers.
> I suspect it would be more likely that an assembler (by which I mean
> a large device which might be in a home and used to build clothes,
> furnishings, electronics, etc.) will have a catalog of devices it can
> build, with variations that can be programmed in. Maybe one assembler
> can build another, but you wouldn't have control at the level of making
> changes to remove limitations on what it could build, any more than a
> MUD program lets you drop into assembly language.
And here we hit a terminology problem: what is an assembler?
I think we need to carefully distinguish macroscopic machines (capable
of creating other macroscopic machines by use of molecular-scale
subassemblies) from the molecular-scale assemblers. The security issues
for each are different. Wearing a policeman's hat, my goal for the
macroscopic machines (macros?) would be "(a) stop them from being used to
assemble weapons of mass destruction". My goal for the nanoscale machines
(nanos?) would be "(b) stop them from being used to assemble macros
capable of (a)", with a secondary goal of "(c) stop them from assembling
a range of nasty molecules that we really don't want, like botulinum
toxin".
This gives us three distinct monitoring and prevention problems. And
they do overlap: a MUD isn't _supposed_ to let you drop into assembly
language, but if you have the source code to it you can go looking for
a workable buffer overrun attack that lets you write arbitrary machine
instructions on its heap.
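(The shape of that attack, simulated in Python against a made-up
flat-memory toy -- real overruns happen in the MUD's C heap, of course,
but the mechanics are the same: an unchecked copy into a fixed buffer
spills into an adjacent cell the program trusts:)

```python
# Hypothetical flat-memory model: an 8-cell input buffer sits directly
# below the cell holding the "address" the interpreter jumps to next.

BUF_START, BUF_SIZE = 0, 8
JUMP_CELL = BUF_START + BUF_SIZE      # adjacent to the buffer
memory = [0] * 16
memory[JUMP_CELL] = 100               # legitimate handler "address"

def read_input(data):
    # No bounds check -- copies however much the attacker supplies.
    for i, byte in enumerate(data):
        memory[BUF_START + i] = byte

read_input([0x41] * BUF_SIZE + [666])  # 9 values into an 8-cell buffer
assert memory[JUMP_CELL] == 666        # control flow now attacker-chosen
```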
Given the general-purpose capabilities of a macroscopic assembler, I'd
expect that any attempt to market a black-box unit on which only
registered designs can be manufactured will meet with consumer resistance
(and hackers). Hell, people even hack Microsoft Barney! The precedents are
not good; any security mechanism will need to be based on something
much more solid than security through obscurity, or saying "it is a
criminal offense to tamper with an ACME AnyFab machine". That's why
I'm trying to think up a fail-safe approach.
> An interesting example though of where secrecy is used for somewhat
> similar purposes to what you describe: recently there have been articles
> revealing that color copiers each have a unique digital "fingerprint"
> which they embed into copies. Given a copy it is possible to trace it
> back to the exact machine which made it. This is intended as an
> anti-counterfeiting measure. Most people have been unaware of this (although
> rumors have been around for years). See the well respected Privacy
> Forum Digest at http://www.vortex.com/privacy/priv.08.18.
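One plausible mechanism for that kind of fingerprint -- a guess on my
part, since the actual copier schemes are proprietary -- is to hide the
machine's serial number in the least significant bits of the output,
where the change is invisible to the eye but recoverable from any copy:

```python
# Hypothetical per-machine fingerprinting: embed a 16-bit serial number,
# one bit per pixel, in the low-order bit of the output image.
# (This is an illustration of the principle, not the real scheme.)

SERIAL_BITS = 16

def embed(pixels, serial):
    """Return a copy of `pixels` with `serial` hidden in the low bits."""
    out = list(pixels)
    for i in range(SERIAL_BITS):
        bit = (serial >> i) & 1
        out[i] = (out[i] & ~1) | bit   # overwrite only the LSB
    return out

def extract(pixels):
    """Recover the serial number from a marked image."""
    return sum((pixels[i] & 1) << i for i in range(SERIAL_BITS))

page = [200] * 64                      # flat gray "scan"
marked = embed(page, serial=0xBEEF)
assert extract(marked) == 0xBEEF       # traceable back to the machine
assert max(abs(a - b) for a, b in zip(page, marked)) <= 1  # imperceptible
```

The relevant point for assemblers is the same one as for copiers: the
marking only deters people for as long as they don't know it's there.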
This archive was generated by hypermail 2b29 : Thu Jul 27 2000 - 14:03:37 MDT