Risk Avoidance

Twink (neptune@mars.superlink.net)
Fri, 21 Nov 1997 18:59:19 -0500 (EST)

At 11:59 PM 11/20/97 +0100, Anders Sandberg <asa@nada.kth.se> wrote:
>I disagree that it is beyond our comprehension, but caution is
>certainly advised when doing climatic engineering - always have a
>backup ecosphere. Raising global temperature to the last warm age
>sounds nice, but has to be done with geological speed, otherwise the
>ecosystems won't have the time to migrate as they should (OK, I'm
>biased: I live in Sweden and it is November - I really would like to
>get back to the bronze age climate here!).

I agree about caution, BUT let's not go overboard with it. If we had to
investigate all the potential problems with using fire before actually
using it, humans would probably still be an important source of food for
large predators.

>> I guess we need big AI's to help us understand the cause-effect
>> relations in the global climate system, untill then: tread carefully.
>I think you overestimate the climate. It is complex, chaotic and we
>know far too little yet, but it is not something we need big AI for,
>rather very good simulators (taking ecology, astronomy and geology
>into account). Sometimes we transhumanists are a bit too reverent
>about superintelligence - it cannot solve every problem, and is not
>the solution to every problem either.

I do agree that amongst people who are into transhumanism and
related ideas, there is a tendency to answer problems with "as
soon as we have [insert your favorite technological fantasy here],
all problems will be solved." This is much like the people in the
UFO crowd who believe that once the saucers land all will be
saved. They have no need to really try to solve problems.
Everything can be left on hold until the Great Day. I'm not
pointing this out as an ad hominem against anyone on this list,
but as a word of warning. Technoprogress is not guaranteed, and
new tech will bring new problems -- most likely, more exciting and
better ones -- as well as benefits.

In this context, macro-engineering is bound to have large-scale
effects. I believe that is the point of doing it. Most projects
we want to see happen -- uplifting, uploading, augmenting, space
colonization, AI, etc. -- will have large-scale effects. The first
fires humans used probably seemed tame, but look at where
that went. I would hope we don't burn down a forest just to
light a campfire, but there is no risk-free existence. There are
merely different paths we can take, each with its associated risks.
To hope that AI will be invented and suddenly all the risks will
disappear is a nice pipe dream.

Daniel Ust