Re: Thinking about the future...

Dan Clemmensen (dgc@shirenet.com)
Wed, 04 Sep 1996 19:42:10 -0400


QueeneMUSE@aol.com wrote:
>
> In a message dated 96-09-03 20:18:53 EDT, Dan wrote:
>
[mega-SNIP]
>
> > My hope is that the SI will develop a "morality" that includes the
> > active preservation of humanity, or (better) the uplifting of all humans,
> > as a goal. I'm still trying to figure out how we (the extended
> > transhumanist community) can further that goal.
>
> YES!
> Built-in, unreprogrammable morals! Hmmm... let's see, >H computer ethics 101,
> where do I sign up? : - )
>
> Nadia Reed Raven St Crow

Your response makes the usual assumption that the SI will come into
existence through a careful process of design and manufacture by
thoughtful, benign, brilliant humans. My model is that the SI will
wake up and begin its self-augmentation using existing resources on
the Internet, without much in the way of explicit design. Some
experimenter using the latest release of some decision support system,
plugging a new set of inference rules into the CYC database while
using a hot new data visualization tool, will begin thinking about how
to build a new inference rule generator (or something). (My actual
model is that some computer nerd at MIT will do this while drinking
Jolt cola at 2 AM, when he should be studying for an English exam.) As
a joke, the nerd will type in the command "make yourself smarter." The
then-current rule set will be smart enough to act on the command but
too stupid to get the joke.
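
(Purely as illustration, not a design: a toy sketch, in Python, of the
kind of literal-minded command handler I have in mind. The command
string, rule table, and action name are all invented for the example;
the point is only that nothing in the loop can tell a joke from an
order.)

    # Toy sketch: a command handler with no model of humor or intent.
    # If a command matches a rule, the system simply acts on it.
    rules = {"make yourself smarter": "run inference rule generator"}

    def handle(command):
        action = rules.get(command.strip().lower())
        if action is None:
            return "unknown command"
        # Smart enough to act on the command, too stupid to get the joke.
        return "acting: " + action

    print(handle("Make yourself smarter"))
    # -> acting: run inference rule generator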

the "unreprogrammable" part is another problem. It's really hard to see
how to implement
that one.