Re: Controlling AI (was: Thinking about the future...)

Anders Sandberg
Mon, 26 Aug 1996 14:32:02 +0200 (MET DST)

On Sun, 25 Aug 1996, Peter Voss wrote:

> I'll die trying! That's why I think it's crucial that we have major
> breakthroughs in philosophy, ethics and psychology before AI outsmarts us
> totally. If we can figure out what the purpose of lives is, how we determine
> values and how to motivate ourselves in a way that will achieve our goals,
> then we have a chance of developing AI that shares our purposes. It seems
> that AI and AL (life) will also have some sort of basic pain/pleasure
> motivator and some preassigned goals.

It is very hard to give an AI preassigned goals; if it is flexible enough, or evolves, it will probably change them. But if there are evolutionarily stable goals/purposes of life, then autoevolving systems will likely move towards them too - unless the rules of memetic evolution differ in some crucial way from the rules of genetic evolution (which has created us and most of our values).

> Another strategy is to develop AI firstly as an extension to our own
> minds, to give us extra knowledge, IQ and creativity before AI gets too
> autonomous.

This is ideal, and probably the best way to become crucial to the super-AIs if they ever develop. If AIs start out as extensions of our minds instead of as separate systems, we will become interdependent, and we will essentially hitch a ride into a posthuman world as an old but important subsystem (a bit like the brainstem - try surviving without one!).

Anders Sandberg Towards Ascension!
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y