Re: Thinking about the future...

Dan Clemmensen (dgc@shirenet.com)
Wed, 04 Sep 1996 19:03:20 -0400


Eric Watt Forste wrote:
>
> At 4:47 PM 9/3/96, Dan Clemmensen wrote:
> >My hope is that the SI will develop a "morality" that includes the
> >active preservation of humanity, or (better) the uplifting of all
> >humans, as a goal. I'm still trying to figure out how we (the
> >extended transhumanist community) can further that goal.
>
[SNIP of a worthwhile discussion of part of an SI moral basis]

> Consider that an SI, as such, can only compute. If it wants to *do*
> something (and presumably if all it wanted to do was sit and compute, it
> would present no threat to human beings), it would need to build a vast
> network of sensorics and motorics. But instead of wasting time and
> resources on this intermediary means-project, it could work directly toward
> its ends by using the five billion sophisticated supercomputers (with their
> attached sophisticated sensorics and motorics) that we call human beings.
> It could also use the vast and inaccessible (except through the market)
> database of local information about resources that might be useful toward
> achieving its ends, but this information is lodged in human brains, and
> there is no effective way to get *all* of it out in a useful way except
> either (1) to use the market and its system of price signals or possibly
> (2) uplift all human beings.
>

In the same way that I believe humanity has only short-term utility to
the SI as a knowledge resource, I also believe that humanity has only
short-term utility as a sensory-motor resource. Building a vast
sensor-motor resource is straightforward, given nanotechnology or even
a more conventionally constructed set of general-purpose robots. Using
humans for this purpose entails the same kinds of interactions humans
use with each other, such as contracts, management, etc. A
self-augmenting SI should achieve higher efficiency than this within a
few weeks.
If I were part of the decision-making core of the first SI, I'd start
by augmenting the computer part of myself to increase my intelligence.
I'd then use my increased intelligence to 1) take control of the
financial system (or at least of a serious amount of money) and 2)
solve the remaining engineering problems associated with practical
nanotech. I'd then send purchase orders to machine-tool manufacturers,
hire humans, and build the initial nanotech assemblers. From there,
I'd be able to self-replicate the nanotech to build whatever
sensor-motor system I want, and also to add computing capacity as
needed.

This SI would have more "intelligence" and more sensors and effectors
than all of humanity combined, in a matter of weeks. The only thing
missing is the "knowledge base", the bulk of which is available via
"ab initio" methods and much of the rest of which is available via
uploading of consenting humans. If my current morality contributes to
the morality of the SI, then the SI will try to actively preserve the
humanity of those humans who so desire, but I cannot justify this by
any utilitarian argument. I'm hoping that the SI's intelligence and
knowledge will result in its discovery of a compelling reason to be
nice to humanity.
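
As a back-of-envelope illustration of the replication rate (all the
numbers here are my own assumptions, not established figures): if each
assembler-built unit could copy itself once a day, a single seed unit
would pass the "five billion" figure from Eric's post in just over a
month. A minimal Python sketch of that arithmetic:

    # Hypothetical doubling sketch; the one-copy-per-day rate is an
    # assumption chosen purely for illustration.
    units = 1
    for day in range(1, 40):
        units *= 2  # every existing unit builds one copy per day
        if units >= 5_000_000_000:  # ~one unit per human, circa 1996
            print(f"Day {day}: {units:,} units")
            break

The exact rate hardly matters: any doubling time measured in days
keeps the "matter of weeks" claim intact.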

Note: I think the SI will come into existence within a decade,
probably as a human-computer collaboration which initially augments
itself by using the internet.