"Zero Powers" <firstname.lastname@example.org> writes:
> My fear is that we as a species will not morally evolve fast enough
> to keep up with our technology. We will be the equivalent of 5 year olds
> who have learned to manufacture sub-machine guns. Not a pretty picture.
> I don't know what the solution is, but I'm leaning more and more toward
> leaving the technology in the hands of Big Brother until we as a race are
> mature enough to handle it.
I think this shows one of the major problems with this line of thinking:
if Joe Q. Public is not to be trusted with a nanoassembler,
how can we then trust Big Brother? If humans are not morally evolved
enough (whatever that means), then the aggregate of humans making up
Big Brother is also not likely to be morally evolved enough. And if a
maniac with nano gives you bad dreams, then try not to dream about a
nasty Big Brother with nano.
There is also another problem here, and that is that "moral evolution"
(whatever that is; I agree that we have seen a trend over the last
centuries towards more human-friendly and "nice" moralities, but that
is no proof of an ongoing evolution towards something "good" and might
just be my own particular prejudices talking) might not be
enough. Imagine a world of saints, plus one maniac with doomsday nano:
you get the same problem regardless of the average ethical standing of
people, as long as the product of available destructiveness times the
number of people times the fraction of dangerous lunatics becomes large
enough. To remain safe, you need to keep down the technology, which
seems very hard to do; keep down the population, which might be doable
but only for other reasons; or keep the fraction of lunatics as low as
possible, which is likely even harder than keeping control over
technology. After all, you can be a dangerous lunatic without knowing
it yourself or being apparent to others.
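The risk product above can be made concrete with a toy calculation. This is only an illustrative sketch; the function name and all the numbers are hypothetical assumptions, not estimates of anything real:

```python
# Hedged sketch of the risk product: the expected number of catastrophic
# actors scales roughly with population size times the fraction of dangerous
# lunatics, and matters only once per-actor destructiveness crosses a
# catastrophic threshold. All values are illustrative assumptions.

def expected_catastrophic_actors(population, lunatic_fraction,
                                 destructiveness, catastrophe_threshold):
    """Expected count of actors both inclined and able to cause catastrophe.

    destructiveness: damage one motivated individual can inflict
    (arbitrary units); catastrophe_threshold: damage level counted
    as catastrophic.
    """
    if destructiveness < catastrophe_threshold:
        return 0.0  # no single actor can cross the threshold
    return population * lunatic_fraction

# Even a world of near-saints (lunatic fraction of one in a billion)
# yields a nonzero expectation once the population is large and
# per-actor destructiveness exceeds the threshold.
saints = expected_catastrophic_actors(10**10, 10**-9, 10**6, 10**5)
print(saints)  # 10.0
```

The point of the sketch is that lowering any one factor helps only if the product stays small: average moral standing does not appear in it at all, only the tail fraction.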
As I see it, this dilemma is important. It is maybe one of the most
worrying socio-philosophical problems of transhumanism. Banning
dangerous technologies seldom works, at least for those that require
sufficiently small investments to develop. Big Brother is not a viable
option because of the risk of abuse. Non-human regulatory systems
like the nanarchy idea debated in the Good Old Days on this list have
the same problem. It is not clear that balancing a powerful enough BB
with a transparent society of little brothers is
possible. Self-regulation probably works better than most people
realise, but only against threats that do not wipe out the whole
system the first time they are used.
Probably the only viable solution is to make sure the system can
survive huge disasters, so that these disasters become something that
can be handled at least partially. Space colonisation is a good first
step, but currently too expensive to do as a security precaution. With
enough nano it becomes cheaper and viable, but then we have a window of
opportunity, whose width is hard to estimate, between the development
of space-applicable nano and a spread of dangerous technology
sufficient to make catastrophic destruction likely.
Making people saner, putting safeguards into place, having an open
debate about issues and applying the technology for protection (such
as active shields for nano) might not be enough, but it buys us time
and maneuvering room.
--
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
email@example.com                       http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y
This archive was generated by hypermail 2b29 : Thu Jul 27 2000 - 14:09:05 MDT