Re: Nanotech Restrictions (was: RE: Transparency Debate)

From: Adrian Tymes (wingcat@pacbell.net)
Date: Sat Apr 08 2000 - 13:38:33 MDT


Phil Osborn wrote:
> From: Adrian Tymes <wingcat@pacbell.net>
> >The pursuit of knowledge and true power makes one less interested in the
> >pursuit of power over people?
>
> This line of argument is probably one of those more likely to yield real
> results. The same problem is faced in bringing up baby SIs. Either there
> are or there aren't rational grounds for something that we would recognize
> as morality, as in ethics - such as the non-aggression principle.
>
> The other part, without getting into any of the details, is that the
> rationality of being ethical may in fact be contextual. E.g., lying in a
> society in which aggression is rare and one is not being threatened for
> telling the truth may be unethical and immoral, whereas lying as a Jew in
> Nazi Germany may be morally correct. In fact, in societies such as that, a
> rational ethics may be so impossible to derive on a moment-by-moment basis
> that the processing costs outweigh any possible benefits, and crude first
> approximations and main chances rule.
>
> If these two general perspectives are accurate, then the questions become
> related to what would be - in ideal circumstances - a rational ethics, and
> what kind of society can support it. I believe that these are two of the
> main, if not the most important, questions, and, having partially resolved
> both, I am working on systems to bring such a society about, i.e., social
> infrastructure such as an explicit universal social contract.

I've been working on a system that seems similar - perhaps you could
tell me if it is?

I call it "enlightened greed", and it is driven by the principle that
the correct action is that which will benefit me most in the long term.
For example, longevity research is a good thing in part because it will
result in practices and objects that will allow me to live longer, but
the easiest way to get it is by encouraging this research for application
to the general public: when the benefits are so applied, then I, as a
member of the public, can take advantage of them. Therefore encouraging
longevity research, so that anyone who wants to can live longer, is the
correct approach for me to take.
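
To make the rule concrete, here is a toy sketch of the decision
procedure. It is only my own illustration - the action names, the
numbers, and the "share of public gain" figure are all invented - but
it shows the shape of the rule: each candidate action is scored purely
by its expected long-term payoff to me, and the indirect payoff I
capture as one member of the public counts alongside any direct payoff.

    # Toy sketch of "enlightened greed": pick the action whose expected
    # long-term payoff *to me* is highest.  Benefits that reach me only
    # because they reach the general public count just as much as
    # direct ones.  All names and numbers below are invented.

    actions = {
        # action: (direct benefit to me, benefit to the general public)
        "hoard longevity research":            (5.0,  0.0),
        "encourage public longevity research": (1.0, 10.0),
        "do nothing":                          (0.0,  0.0),
    }

    # As a member of the public, I eventually capture some share of any
    # gain the public at large enjoys.
    MY_SHARE_OF_PUBLIC_GAIN = 0.7

    def my_long_term_payoff(direct, public):
        return direct + MY_SHARE_OF_PUBLIC_GAIN * public

    best = max(actions, key=lambda a: my_long_term_payoff(*actions[a]))
    print(best)  # -> "encourage public longevity research"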

The important thing is that nowhere in the decision is there any
consideration of what would be good or bad for anybody else, except
insofar as that may eventually affect me. And yet the more I tinker with
it, the more it seems to arrive at ends that, under more common ethical
measures, are labeled "noble" or "altruistic" - qualities not usually
associated with greed.

> In a contractual society, in which - given my other assumptions -
> presumably all rational people could be convinced of the advantage of
> being ethical, a lot of the really bad scenarios that would otherwise
> require draconian measures, such as a police state to stop nanoterrorists,
> would be limited to a probably small minority of very irrational people,
> who would likely be identified fairly early as creating high risks for
> anyone who dealt with them long term. Thus, those people would be watched
> and would probably pay high insurance premiums and find themselves
> unwelcome in many social settings.

But how do you make sure that the criteria for "watch" and "pay high
insurance premiums" target only those who would have a negative effect
on society? Nanoterrorists are a subset of people who know how to use
nanotech, but if it is far easier/cheaper to identify nanotech users
than just malicious nanotech users, how do you avoid alienating those
who would use nanotech to benefit society? (Note that this alienation
may, in some cases, turn those who would otherwise work for everyone's
benefit into revenge-driven terrorists.) This might work if you could
get around that problem.
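
To put rough numbers on that worry - these are purely made-up figures,
just to show the shape of the problem - suppose the cheap criterion is
simply "knows how to use nanotech":

    # Made-up figures illustrating why "watch every nanotech user"
    # mostly burdens the people you want on your side.
    nanotech_users = 1000000
    malicious_rate = 1.0 / 10000       # assumed fraction who are hostile

    malicious = nanotech_users * malicious_rate   # 100 people
    benign = nanotech_users - malicious           # 999,900 people

    # Fraction of the watch list that consists of benign users:
    print(benign / nanotech_users)                # ~0.9999

Even if every actual terrorist really does end up on the list, under
these assumed numbers 9,999 out of every 10,000 people subjected to the
watching and the premiums are exactly the people whose goodwill the
society depends on.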


