I agree about caution BUT let's not go overboard with it. If we had to
investigate all the potential problems with using fire before actually
using it, humans would probably still be an important source of food
for leopards. :)
>> I guess we need big AI's to help us understand the cause-effect
>> relations in the global climate system, until then: tread carefully.
>
>I think you overestimate the climate. It is complex, chaotic and we
>know far too little yet, but it is not something we need big AI for,
>rather very good simulators (taking ecology, astronomy and geology
>into account). Sometimes we transhumanists are a bit too reverent
>about superintelligence - it cannot solve every problem, and is not
>the solution to every problem either.
I do agree that among people who are into transhumanism and
related ideas, there is a tendency to answer problems with "as
soon as we have [insert your favorite technological fantasy here],
all problems will be solved." This is much like the people in the
UFO crowd who believe that once the saucers land, all will be
saved. They have no need to really try to solve problems.
Everything can be left on hold until the Great Day. I'm not
pointing this out as an ad hominem to anyone on this list, but as
a word of warning. Technoprogress is not guaranteed and new
tech will bring new problems -- most likely, more exciting and
better ones -- as well as benefits.
In this context, macro-engineering is bound to have large-scale
effects. I believe that is the point of doing it. Most projects
we want to see happen -- uplifting, uploading, augmenting, space
colonization, AI, etc. -- will have large-scale effects. The first
fires humans used probably seemed tame, but look at where
that went. I would hope we don't burn down a forest just to
light a campfire, but there is no risk-free existence. There are
merely different paths we can take and their associated risks.
To hope AI will be invented and suddenly all the risks will
disappear is a nice pipe dream.
Daniel Ust