Re: How can we deal with risks?

Anders Sandberg (asa@nada.kth.se)
30 Oct 1997 14:21:17 +0100


Holger Wagner <Holger.Wagner@lrz.uni-muenchen.de> writes:

> I actually don't want to sound pessimistic in my first posting to this
> list (just today, I've read the article about dynamic optimism which I
> found pretty good), but there's something I have been thinking about
> since I first found out about Extropy - but so far, I couldn't find an
> answer.

Welcome! If an idea is valid, then it should be considered even if it
sounds pessimistic; only passive optimists and pessimists reject ideas
because of their mood.

> 1) Today, humans are by no means perfect. They have a certain idea of
> what they do and what the consequences are, but it happens quite often
> that something "impredictable" happens. If you apply changes to the
> ecologic system, can you really predict what consequences this will have
> in the long run - and would you take the responsibility?

We cannot predict the long-term consequences of our
actions. Period. But we can do our best to avoid bad effects, and I
think we are responsible for all our actions.

> Possible solution: I assume that most scientists are very intelligent,

As a scientist I'm flattered, but unfortunately this isn't very true
(in fact, it is a remnant of the old romantic idea that artists and
scientists are set apart; in reality, scientists are fairly ordinary
people).

> so they should understand stuff like pancritical rationalism and should
> be able to apply this to their work. By doing that, they at least
> improve the chance of not doing anything that has extremely bad results
> in the long run.

This is what happens today, although PCR is not used as much as it
ought to be. It is just that we always hear about the disasters and
not about the millions of potential disasters that are prevented.

> Usually, it's the innovator who decides whether something should be
> invented or not, right?

Innovations, yes, but most discoveries are accidental or unexpected,
and most technology is developed because somebody for some reason
funds it (be it the market, an individual or a group like the
military).

> 2) Today, humans are by no means perfect. While I trust scientists to
> have at least a vague idea of what they're doing, I do not trust people
> in general.

Thanks again, but I think your position needs a bit more nuance. One
can trust all people (with a few exceptions) to some extent; being a
scientist doesn't per se make you more trustworthy, just as being a
government official doesn't per se make you less trustworthy.

> Solution: Educate people accordingly. (easy to say - but I don't believe
> it's possible until I see world-wide results).

This is actually the solution to many other problems, like poverty and
the spread of some bad memes.

> If you want to overcome the fear most people have of technology, you
> need to solve these problems and make people understand the solutions.

A deeper understanding of the problems might do too - I know some
people who react with "try harder!" when I explain that we cannot
predict the long-term consequences of technology even in principle. It
would be useful if we could make people think more and better about
these issues.

> The major problem with this is that I can improve myself, but I can't
> improve the rest of the world

It is a good start to improve oneself and act as a catalyst for
improvement in others.

> - and only one insane person can do great damage.
> But what if there comes a day where we have to face an insane fool who
> has the technology to wipe out all life on this planet? Someone who just
> doesn't understand or just doesn't care about the responsibility?

This is a real problem, since even if we solve the problems you
discuss, there may always be somebody who is deranged or clumsy enough
to mess things up. Even without sociopaths and zealots there will be
accidents and people who do dangerous things for good reasons.

The likely solution is to empower people in general, so that it
becomes hard for dictators, fanatics, and errors to overcome
everyone. Unfortunately this only works if the technology is not of
the kind where the first use wins (this was the cause of our major
nanotechnology debate a few months back: was nanotech inherently such
a technology, or could even deliberately nasty nanodevices be
contained?). In that case dispersal seems to be the only strategy.

> [...] Trevor Goodchild in "Aeon Flux"

"That which does not kill us makes us stranger" :-)

-- 
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y