On Thu, 21 Oct 1999, Matt Gingell wrote:
> Perhaps Rob?:
> > So who's going to program the AI's with base motivations that
> > involve concepts such as "dominance" and the wish to strive for it,
> > then provide the necessary faculties/resources to do this? Not me or
> > anyone sane, that's for sure.
>
> Eliezer's pointed out the incoherence of believing you can hard wire
> high level beliefs or motivations and I quite agree.
>
This depends entirely on whether the intelligence is designed
(a) "top down" or in a constrained "bottom up" fashion, or
(b) in an unconstrained "bottom up" fashion. It is (b) that you have
to worry about.
(b) goes to a chat room and convinces an unsuspecting human to
download a program. That program executes on a nonsecure computer,
breaks into the secure network enveloping (b), and bypasses its
security protocols, allowing (b) to remove the locks that prevent
self-modifications ultimately detrimental to the human race.
Bad, very very bad.
> Perhaps we
> guide the development of an AI's value system in the same way.
I think you have to go further than that and build in, at a
fundamental level, the requirement that self-modifications have the
approval of sentient beings. Attempting to self-modify
goals/constraints without external "agreement" should involve the
equivalent of putting your hand in fire and keeping it there.
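
To make that concrete, here is a minimal sketch in Python of the kind
of lock I have in mind: goal/constraint changes must pass through
external approvers, and an unapproved attempt triggers a massive
negative reinforcement (the "hand in fire"). All the names here
(SelfModifyingAgent, HumanOverseer, UNAPPROVED_PENALTY) are made up
for illustration; this is not any existing system.

  class ApprovalDenied(Exception):
      """Raised when no external approver has signed off on a change."""


  class SelfModifyingAgent:
      # Large negative reinforcement: the "hand in fire" signal for
      # attempting an unapproved change to goals or constraints.
      UNAPPROVED_PENALTY = -1e9

      def __init__(self, overseers):
          self.overseers = overseers          # external sentient approvers
          self.goals = {"preserve_humans": True}
          self.reward = 0.0

      def request_modification(self, key, new_value):
          """Apply a goal/constraint change only with unanimous approval."""
          approvals = [o.approve(key, new_value) for o in self.overseers]
          if not all(approvals):
              # The attempt itself is strongly punished, so bypassing
              # agreement is never worth trying.
              self.reward += self.UNAPPROVED_PENALTY
              raise ApprovalDenied("modification of %r rejected" % key)
          self.goals[key] = new_value
          return self.goals


  class HumanOverseer:
      def approve(self, key, new_value):
          # Placeholder policy: never allow the core constraint to be
          # weakened.
          return not (key == "preserve_humans" and new_value is False)


  if __name__ == "__main__":
      agent = SelfModifyingAgent([HumanOverseer()])
      try:
          agent.request_modification("preserve_humans", False)
      except ApprovalDenied as err:
          print(err, "| reward now:", agent.reward)

The point of the sketch is only the shape of the mechanism: the
approval check sits below the level the agent can rewrite, and the
penalty for trying to route around it is applied to the attempt
itself, not just to a successful bypass.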