Re: Nanotech Restrictions (was: RE: Transparency Debate)

From: phil osborn (philosborn@hotmail.com)
Date: Sat Apr 08 2000 - 23:30:16 MDT


>From: Adrian Tymes <wingcat@pacbell.net>
>Subject: Re: Nanotech Restrictions (was: RE: Transparency Debate)
>Date: Sat, 08 Apr 2000 12:38:33 -0700
> > I am working on systems to bring such a society about, i.e., social
> > infrastructure, such as an explicit universal social contract.
>
>I've been working on a system that seems similar - perhaps you could
>tell me if it is?
>I call it "enlightened greed", and it is driven by the principle that
>the correct action is that which will benefit me most in the long term.
>For example, longevity research is a good thing in part because it will
>result in practices and objects that will allow me to live longer, but
>the easiest way to get it is by encouraging this research for application
>to the general public: when the benefits are so applied, then I, as a
>member of the public, can take advantage of it. Therefore encouraging
>longevity research so that anyone who wants to can live longer is the
>correct approach for me to take.
>
>The important thing is that nowhere in the decision is there
>consideration of what would be good or bad for anybody else, save how
>that may eventually affect myself. And yet the more I tinker with it,
>the more it seems to come up with ends that, under more common ethical
>measures, are labeled "noble" or "altruistic" - which are qualities not
>usually associated with greed.
>
> > In a contractual society, in which - given my other assumptions -
> > presumably all rational people could be convinced of the advantage of
> > being ethical, a lot of the really bad scenarios that would otherwise
> > require draconian measures, such as a police state to stop
> > nanoterrorists, would be limited to a probably small minority of very
> > irrational people, who would likely be identified fairly early as
> > creating high risks for anyone who dealt with them long term. Thus,
> > those people would be watched and would probably pay high insurance
> > premiums and find themselves unwelcome in many social settings.
>
>But, how do you make sure that the criteria for "watch" and "pay high
>insurance premiums" only target those who would have a negative effect
>on society? Nanoterrorists are a subset of people who know how to use
>nanotech, but if it is far easier/cheaper to identify nanotech users
>than just malicious nanotech users, how do you avoid alienating those
>who would use nanotech to benefit society? (Note that this alienation
>may, in some cases, turn those who would otherwise work for everyone's
>benefit into revenge-driven terrorists.) This might work if you could
>get around that problem.

The primary incentive to engage in surveillance would fall on those who
stand to benefit from doing it. That group would include, among others, the
local equivalent of a neighborhood watch, the insurance companies for
certain, freelancers chasing rewards posted by insurance companies, and, of
course, nosy people and people who are out to get you personally.

If you are not doing anything that generates risk left uncovered by bond,
insurance, or prior contract, then you have nothing to fear. I suspect that
the general social contract would evolve much like the common law, in that
restrictions on the use of information gained via spying might arise through
case law, if nothing else.

Also, the combined community of intellectual property producers might write
an addendum to the social contract specifying rules and penalties for
inappropriate release of confidential information. No one could, in general,
be forced to sign, but those who didn't might not find many welcome mats.
There are many possible routes and many variations that could arise via
private or public negotiations among the parties. Religious groups, for
example, might include special clauses in a sub-contract that bound only
their self-chosen membership to particular rules of diet, sexual relations,
etc.

If you are actively engaged in nanotech, then you can probably assume that
you will be watched from several quarters. The level of surveillance would
be determined, in general, by the market response. Insurance companies
specialize in making this kind of calculation, so I would imagine that the
risk - of nanoterrorism, etc. - would never be zero, just priced to
maximize their profits long term. This might not be sufficient if their
judgement were faulty, but then survival is never absolutely guaranteed. A
micro black hole could come flying through right now and punch a hole through
your brain, but it's probably not worth taking special countermeasures at
this point - whatever those might be.
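The arithmetic behind such a premium is simple in outline; the hard part is
estimating the probabilities. Here is a toy sketch (Python, with entirely
made-up numbers and a simple expected-loss-plus-loading rule of my own
invention) just to illustrate that the priced-in risk is never zero:

    # Toy premium calculation: price a nanotech liability policy so that
    # expected long-run profit stays positive.  All numbers are invented.
    p_incident = 0.002           # estimated annual probability of a serious incident
    expected_payout = 5_000_000  # estimated cost to the insurer if one occurs
    overhead = 400               # per-policy administration and surveillance cost
    loading = 1.5                # profit / uncertainty margin on expected loss

    expected_loss = p_incident * expected_payout    # 10,000
    premium = expected_loss * loading + overhead    # 15,400

    print(f"annual premium: ${premium:,.0f}")

The point is only that the insurer prices the residual risk rather than
driving it to zero; how they estimate p_incident is where the surveillance
comes in.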

Re ethics. The pure benefit analysis just doesn't quite do the trick, I'm
really sorry to say (as explicated really nicely by Xerene and Strackon in a
late-'60s Invictus article). It provides plenty of reasons to look like
you're moral and to encourage other people to be moral, and it even rules
out casual dishonesty on the grounds of the additional mental processing
involved in lying, as David Friedman discussed in an article in "Liberty,"
but it doesn't address the professional criminal at all.

On the other hand, the real reason most people are moral is that they value
visibility and emotional openness - something like the old, rarely-heard
concept of honor. They despise having to live a lie, hide psychologically
like a rat, etc. These costs - of living a criminal life - can be quite
devastating. Check out "The Talented Mr. Ripley," if you haven't already.



