Anders Sandberg wrote:
> Actually, we probably need more good studies of pros and cons of
> transhumanists ideas put on the web, to show that we are actually
> looking seriously at these issues. Otherwise people will get the
> impression that we are just naive technophiles.
And yet, wasn't den Otter's point that the real reason we should do this is not the danger that people might otherwise think us naive, but the actual technological risks themselves? As long as we are only thinking about what it takes to spread the meme, we aren't really serious about what ought to be one half of transhumanism: anticipating and averting threats.
Here is my view: Rather than thinking of transhumanism as simply pro-technology, let's think of it as pro certain options that advanced technology will offer (life extension etc.). This is fully compatible with devoting a lot of attention to potential risks.
So rather than saying that the problem of malicious nanomachines is an argument against transhumanism, let transhumanist thinking grow to encompass this danger. We can be the ones who talk about the risks and the need to do something about them. We can take the lead in thinking about the downsides and dangers as well as in envisioning the opportunities inherent in technological development. The Foresight Institute has managed to do this in the domain of nanotechnology, but I think we have some way to go to do the same for our field which includes science and technology in general.
A first step might be to separate the true threats that one should be worrying about (e.g. destructive uses of nanotech or malicious AI) from the much smaller threats or pseudo-threats (e.g. GM food, cloning) that the public at large is worried about.
A second step would then be to discuss what strategies would minimize these threats.
Nick Bostrom
http://www.hedweb.com/nickb n.bostrom@lse.ac.uk
Department of Philosophy, Logic and Scientific Method
London School of Economics