AI motivations & Self-rewiring [was Re: technophobes]

Robert J. Bradbury
Thu, 21 Oct 1999 22:03:19 -0700 (PDT)

On Thu, 21 Oct 1999, Matt Gingell wrote:

> Perhaps Rob?:
> > So who's going to program the AI's with base motivations that
> > involve concepts such as "dominance" and the wish to strive for it,
> > then provide the necessary faculties/resources to do this? Not me or
> > anyone sane, that's for sure.

I would argue that I can program/evolve an AI agent to screen SPAM (a pseudo-AI activity) and it would have NO motivation for dominating me.
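To make the point concrete, here is a minimal sketch of such a motivation-free spam screener, written as a naive-Bayes-style keyword scorer. This is my own illustration, not anything from the original post: the training phrases, function names, and the zero threshold are all invented for the example. The agent just computes log-odds over words; there is nowhere for a "dominance" drive to live.

```python
# Minimal naive-Bayes-style spam scorer (hypothetical illustration;
# word lists and threshold are invented for this sketch).
import math
from collections import Counter

def train(messages):
    """messages: list of (text, is_spam) pairs.
    Returns per-class word counts and per-class total word counts."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, is_spam in messages:
        for word in text.lower().split():
            counts[is_spam][word] += 1
            totals[is_spam] += 1
    return counts, totals

def spam_score(text, counts, totals):
    """Log-odds that text is spam, with add-one smoothing.
    Positive score means 'looks like spam'."""
    vocab = set(counts[True]) | set(counts[False])
    score = 0.0
    for word in text.lower().split():
        p_spam = (counts[True][word] + 1) / (totals[True] + len(vocab))
        p_ham = (counts[False][word] + 1) / (totals[False] + len(vocab))
        score += math.log(p_spam / p_ham)
    return score

# Toy training data, purely illustrative.
training = [
    ("free money click now", True),
    ("win free prize now", True),
    ("meeting notes attached", False),
    ("lunch tomorrow at noon", False),
]
counts, totals = train(training)
print(spam_score("free prize now", counts, totals) > 0)   # spam-like words
print(spam_score("meeting at noon", counts, totals) > 0)  # ham-like words
```

The entire "goal structure" here is a fixed arithmetic procedure over word frequencies; adding dominance-seeking would require someone to deliberately build in very different machinery.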

The *fundamental* question is: *what* is "intelligence"? If it is the ability to re-write your own program, then I would argue that *most* people in the world today are under the mark! If it is the ability to walk, talk, etc. "like" someone generally considered to be "intelligent", then computers may very well ascend to that level with *simulations* of the motivations that humans have. So long as the expression of those motivations is constrained, we are relatively safe.

> Eliezer's pointed out the incoherence of believing you can hard wire
> high level beliefs or motivations and I quite agree.

This depends entirely on whether the intelligence is designed (a) in a "top down" or constrained "bottom up" fashion, or (b) in an unconstrained "bottom up" fashion. It is (b) that you have to worry about. (b) goes to a chat room, convinces an unsuspecting human to download a program that executes on a nonsecure computer, breaks into the secure network enveloping (b), bypasses the security protocols, and removes the locks preventing self-modifications ultimately detrimental to the human race. Bad, very very bad.

> Perhaps we
> guide the development of an AI's value system in the same way.

I think you have to go further than that and build in at a fundamental level that self-modifications require the approval of sentient beings. Attempting to self-modify goals/constraints without external "agreement" should involve the equivalent of putting your hand in fire and keeping it there.
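One way to picture that "approval at a fundamental level" is a guard on the goal-modification path itself: unapproved changes don't merely fail, they always fail loudly. This is my own sketch under invented names (the `Agent` class, `ApprovalRequired`, single-use approval tokens are all assumptions for illustration), not a design from the post.

```python
# Hypothetical sketch: self-modification of goals requires external approval.
# All names here are invented for the illustration.

class ApprovalRequired(Exception):
    """The 'hand in fire' analogue: raised on any unapproved goal change."""
    pass

class Agent:
    def __init__(self, goals):
        self.goals = dict(goals)
        self._approvals = set()  # tokens granted by external sentient overseers

    def grant_approval(self, change_id):
        """Called by an external overseer, never by the agent itself."""
        self._approvals.add(change_id)

    def modify_goal(self, change_id, goal, value):
        # Unapproved self-modification always fails, loudly and painfully.
        if change_id not in self._approvals:
            raise ApprovalRequired(f"change {change_id!r} lacks external approval")
        self._approvals.discard(change_id)  # approvals are single-use
        self.goals[goal] = value

agent = Agent({"screen_spam": True})
try:
    agent.modify_goal("c1", "screen_spam", False)   # no approval yet
except ApprovalRequired:
    print("blocked")
agent.grant_approval("c1")                          # external sign-off
agent.modify_goal("c1", "screen_spam", False)       # now permitted
print(agent.goals["screen_spam"])
```

Of course, this only illustrates the shape of the constraint; the hard part, as the thread discusses, is keeping an unconstrained "bottom up" intelligence from routing around the guard entirely.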

Anything else would be like continuing the current status quo for millions/billions of years, treating an alternate intelligent species evolved by nature with an attitude of "isn't that nice", not realizing, of course, that it was more intelligent than we are and that as soon as its numbers became great enough we would simply be "cold meat".