Re: Ebola

From: Brian Atkins (brian@posthuman.com)
Date: Mon Aug 07 2000 - 18:34:27 MDT


One possible solution is being pursued directly by SIAI: jump past the
nanotech stage straight to superhuman intelligence. Vote with your donation
dollars, but I feel this is a better approach than trying to develop what
will likely be ineffective means of nanotech/biowar defense. Actually I
donate to both Foresight and SIAI... hedging my bets I guess.

Jason Joel Thompson wrote:
>
> ----- Original Message -----
> From: "Alex Future Bokov" <alexboko@umich.edu>
>
> > We as a civilization are using centralized, hierarchical tools (laws,
> > threat of military force) to fight decentralized, distributed threats.
> > The question is, how can the system be rigged such that anybody working
> > in their basement can also help *prevent* biowarfare? A couple of
> > possibilities immediately spring to mind--
>
> Ken Clements wrote:
>
> > There is a corollary of the Law of Large Numbers called the "Some Nut
> > Theory" which states that "In a sufficiently large population, no matter
> > what it is, there is some nut out there who will try it." Unfortunately,
> > this implies that when the technological cost of whacking your first
> > megaperson gets down to $1.50 and two box tops, a fair number of us are
> > going to get whacked.
>
> Our individual power is increasing at a rapid rate. As already noted, it
> won't be long before it is trivial for any one individual to get a hold of
> the tools necessary to do lots of damage to lots and lots of other people.
>
> I've thought about possible solutions, but they all point in relatively
> undesirable directions.
>
> For instance, the only way I can imagine preventing certain types of
> mega-nano-crimes is to predict the intention. But I don't really like the
> possible implications of this sort of mental 'eavesdropping.'
>
> For those of you who've read 'The Diamond Age' by Neal Stephenson, you'll be
> familiar with the 'nano y nano' scenario. Unfortunately it doesn't appear
> possible to create perfect 'anti-bodies', and it has always been easier to
> destroy something than to protect it from destruction. It falls to the
> defender to anticipate all possible attack vectors, now and in the future,
> and I don't see how that will ever be possible.
>
> Again, I believe the step towards increased security lies in 'getting ahead'
> of the act and intercepting the intention. Since the advantage lies in the
> hands of the attacker, it should necessarily follow that when your brain is
> wired to the network, the tools to hack it will always be one step ahead of
> the tools to shield it. (Barring unbreakable encryption-- of which I am an
> advocate.)
>
> I think some variant on the 'active defense' is going to be necessary in any
> case.
>
> Thoughts?
>
> --
>
> ::jason.joel.thompson::
> ::wild.ghost.studios::


