Re: Thinking about the future...

Anders Sandberg (nv91-asa@nada.kth.se)
Tue, 27 Aug 1996 10:42:34 +0200 (MET DST)


On Mon, 26 Aug 1996, Max More wrote:

> If they need us for doing things physically, we would still have a strong
> position. Nevertheless, powerful SI's in the computer networks, could exert
> massive extortionary power, if they were so inclined. So I still think it
> important that SI researchers pay attention to issues of what values and
> motivations are built into SIs.

I agree that it is important to think about what values and motivations
SIs should have, but these might be hard to code. I like David Brin's
idea (in some of his short stories, like "Lungfish") that AIs could be
brought up a bit like human children and thus acquire human values. An
AI living in the Net might have a very real chance of identifying with
the struggle against net censorship (which would limit it) and against
Microsoft (whose software might be incompatible with its inhomogeneous,
distributed mind).

It should be noted that we will most likely not see any classical
man-versus-SI conflict a la "The Forbin Project"; any conflict would
look completely different. After all, any SI could deduce that humans
react irrationally to extortion.

-----------------------------------------------------------------------
Anders Sandberg Towards Ascension!
nv91-asa@nada.kth.se http://www.nada.kth.se/~nv91-asa/main.html
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y