Re: Darwinian Extropy

Dan Clemmensen (dgc@shirenet.com)
Tue, 10 Sep 1996 19:39:43 -0400


Anders Sandberg wrote:
>
> I have noted something interesting in this thread: we seem to assume
> that SI appears out of nowhere, in a vacuum. The archetypal scenario is
> the nanoworkstation at MIT that transcends overnight and takes over the
> world.
>
> In reality, it is very unlikely that we will get SI before mere HI, and
> HI before subhuman intelligence. When the techniques for creating better
> and better minds appear, they will lead to a succession of better and
> better minds. This will also lead to a better and better understanding of
> the problems and risks of intelligence engineering; unless the growth is
> very fast and uncontrolled, we will know about some of the dangers and
> possibilities.
>
> If we create SI, we will most likely have plenty of HI and >HI, so we
> will know what to expect. It should be noted that even SIs have
> limitations, and they won't be a single giant among Lilliputians - there
> will be many human-sized beings to deal with, armies of dog-sized
> beings, and trillions of Lilliputians...
>

Your scenario may be plausible, but I feel that mine is more likely: the
initial SI (for example, an experimenter together with a workstation and
a bunch of software) is capable of rapid self-augmentation. Since the
experimenter and the experiment are likely to be oriented toward
developing an SI, the self-augmentation is likely to result in rapid
intelligence gain. Your sub-human intelligences are presumably
computer-only AIs, lacking a human component; I don't see an AI as the
likely SI.