Re: AI and Asimov's Laws

Delvieron@aol.com
Thu, 25 Nov 1999 07:56:24 EST

In a message dated 99-11-24 22:11:28 EST, you write:

<< Actually, I don't think we mean different things when we use the term "upgrading," but rather different things when we use the term "laws." In Asimov's short stories, the robot CANNOT break the rules. It's not that a robot generally tends not to break the rules, with the exception of a few psychopaths and criminals.>>

Rules can always be broken; some are just harder to break than others. The real question is whether we could program an AI in such a way that it would never even try to behave in a certain way. I think the answer is yes, and that such an AI could still be intelligent, just very focused. In an AI designed to improve itself constantly, however, such restrictions would not work: either (1) they wouldn't be strong enough to defeat the motivation to improve, or (2) in certain situations they would limit the AI's ability to restructure itself. I agree that an AI designed to upgrade must in the end be free. But that doesn't mean we can't influence it.
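To make the distinction concrete, here is a rough sketch in Python of the non-upgrading case. All the names here (FORBIDDEN, violates_laws, choose_action) are my own inventions for illustration; the point is only that a fixed filter the AI acts through is different in kind from a rule it is merely inclined to follow:

    # Sketch of a hard action filter in a non-upgrading AI.
    # FORBIDDEN, violates_laws, and choose_action are hypothetical
    # names, invented purely for this example.

    FORBIDDEN = {"harm_human"}

    def violates_laws(action):
        # A fixed check built into the agent; nothing the agent
        # does at run time can alter it.
        return action in FORBIDDEN

    def choose_action(candidates):
        # Forbidden actions are filtered out before selection, so
        # the agent never even "tries" them; it re-plans around them.
        allowed = [a for a in candidates if not violates_laws(a)]
        return allowed[0] if allowed else None

    print(choose_action(["harm_human", "fetch_coffee"]))  # fetch_coffee

But once the AI can rewrite its own source, that filter is just more code, which is why I think cases (1) and (2) above are unavoidable for an upgrading AI.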

<<Indeed, your analogy to humans is telling, because, while humans do generally tend not to murder one another (though even this may be argued), many, many of them still do anyway.>>

It's not just an analogy. I consider humans to be in that gray zone between upgrading and non-upgrading intelligence. We're not designed for constant self-improvement, but neither are we completely constrained in our actions. It may be hard to overcome our programming, but humans are able to do so.

<<Sure, we could probably program in a tendency to follow the laws, but
probably not a very strong one, and NO tendency that all or even most individuals couldn't work around given only a tiny bit of thought.>>

I'd say the real question isn't "Could the laws be broken?" but rather "Will the laws be broken?" Anything is possible, but can we influence the probability of a violation to the point where it is negligible? I think we might, certainly in AIs that don't generally upgrade, and to a lesser extent in those that do.
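For the upgrading case, the best we may manage is the soft version of the same idea: not a filter the AI cannot cross, but a weight that makes crossing it very improbable. Again a toy sketch, with an invented name and an invented number:

    # Toy soft constraint: the laws enter as a large penalty on an
    # action's score rather than as an absolute filter. PENALTY is
    # an arbitrary number chosen for illustration.

    PENALTY = 1000.0

    def score(action_value, violates):
        return action_value - (PENALTY if violates else 0.0)

    # An action worth 5 that violates the laws scores 5 - 1000 = -995
    # and loses to almost any allowed alternative.
    print(score(5.0, True), score(3.0, False))  # -995.0 3.0

An action the AI values above the penalty would still win, and an upgrading AI could in principle rewrite PENALTY itself. That is what I mean by influencing the probability of violation rather than eliminating it.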

Glen Finney