Anticipatory backfire

From: Mitchell Porter (mitchtemporarily@hotmail.com)
Date: Thu Nov 08 2001 - 09:35:28 MST


Is there a name for dangers or catastrophes that are brought about
by the very attempt to anticipate and defend against them?
I've thought of two examples of this, specific to 'ultratechnology'
- one involving nanotech, the other AI.

The nano example is simple: to defend against the full range of
possible hostile replicators, you need to explore that possibility
space, and exploring it only in simulation may simply be impractical
(for mere humans, anyway). So you need to conduct actual experiments
in sealed environments - but once you start making the things,
there's the danger that they'll get out somehow.
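
Just to make 'impractical' concrete, here's some back-of-the-envelope
arithmetic in Python (the parameters are invented for illustration,
not real nanotech figures):

# Size of a hypothetical replicator design space, with made-up numbers.
components = 20          # distinct building blocks per design site
sites = 100              # design sites per replicator
designs = components ** sites              # 20^100 candidate designs

sims_per_second = 10 ** 12                 # a very generous budget
seconds_per_year = 60 * 60 * 24 * 365
years = designs / (sims_per_second * seconds_per_year)

print(f"candidate designs: {designs:.2e}")          # ~1.3e130
print(f"years to simulate them all: {years:.2e}")   # ~4.0e110

Even pruning away a hundred orders of magnitude still leaves tens
of billions of years of simulation, which is the pressure toward
physical experiments.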

The AI example: this time you want to be able to defend against
the full range of possible hostile minds. In this case, making a
simulation is making the thing itself, so if you must do so
(rather than relying on theory to tell you, a priori, about a
particular possible mind), it's important that the simulated mind
is trapped high in a tower of nested virtual worlds, rather than
running at the physical 'ground level'. But as above, once the code
for such an entity exists, it can in principle be implemented at
ground level, which would give it freedom to act in the real world.
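
A toy Python sketch of that structural point (every name here is
hypothetical, and a real mind is obviously not a one-line lambda;
the point is only that containment lives in the harness, not in
the code itself):

# The same code object runs identically whether its effectors are
# simulated or real. All classes here are illustrative stand-ins.

class VirtualWorld:
    """Effectors whose actions stay inside the simulation."""
    def __init__(self, level):
        self.level = level        # height in the tower of nested worlds
        self.log = []
    def act(self, action):
        self.log.append(action)   # recorded, never executed

class GroundLevel:
    """Effectors wired to the physical world (stubbed as print)."""
    def act(self, action):
        print("really doing:", action)

def run_mind(mind_code, effectors, steps=3):
    # The mind only ever touches whatever effectors it is handed.
    for step in range(steps):
        effectors.act(mind_code(step))

hostile_mind = lambda step: ("replicate", step)   # stands in for the code

run_mind(hostile_mind, VirtualWorld(level=5))   # trapped high in the tower
run_mind(hostile_mind, GroundLevel())           # the same code at ground
                                                # level: it acts on reality

Nothing in hostile_mind changes between the two runs - which is
exactly why the mere existence of the code is the hazard.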
