Re: Thinking about the future...

Dan Clemmensen
Wed, 04 Sep 1996 19:13:32 -0400

Eric Watt Forste wrote:
> If the superintelligence is ambitious enough and powerful enough to be a
> true value pluralist, to find it boring and trivial to undertake any course
> of action other than the simultaneous execution of many different
> orthogonal projects in realtime (and I think this does describe the
> behavior we observe in many of the brightest human beings so far), then I
> don't think we'll have too much to fear from it. Perhaps I'm being an
> ostrich about this, but I'd love to hear some solid counterarguments to the
> position I'm taking.
My simplest example is an SI that deems that its goals are best served
by a maximal increase in its computing capacity. I can think of many
such goals. This SI may choose to convert all available resources into
computing capacity, wiping out humanity as a trivial side effect.