Re: Darwinian Extropy

Anders Sandberg
Tue, 10 Sep 1996 18:54:53 +0200 (MET DST)

I have noted something interesting in this thread: we seem to assume
that SI appears out of nowhere, in a vacuum. The archetypal scenario is the
nanoworkstation at MIT that transcends during the night and takes over
the world.

In reality, it is very unlikely that we will get SI before mere HI, and
HI before subhuman intelligence. When the techniques for creating better
and better minds appear, they will lead to a succession of better and
better minds. This will also lead to a better and better understanding of
the problems and risks of intelligence engineering; unless the growth is
very fast and uncontrolled, we will know about some of the dangers and
how to handle them.

By the time we create SI, we will most likely already have plenty of HI
and >HI, so we will know what to expect. It should be noted that even SIs
have limitations, and an SI won't be a single giant among lilliputs -
there will be many human-sized beings to deal with, armies of dog-sized
beings, and trillions of lilliputs...

Anders Sandberg Towards Ascension!
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y