In a message dated 99-11-24 22:11:28 EST, you write:
<< Actually, I don't think we mean different things when we use the term "upgrading," but rather different things when we use the term "laws." In Asimov's short stories, the robot CANNOT break the rules. It's not that a robot generally tends not to break the rules, with the exception of a few psychopaths and criminals.>>
<<Indeed, your analogy to humans is telling, because, while humans do generally tend not to murder one another (though even this may be argued), many many of them still do anyway.>>
It's not just an analogy. I consider humans to be in that gray zone between upgrading and non-upgrading intelligence. We're not designed for upgrading, but neither are we completely constrained in our actions. It may be hard to overcome our programming, but humans are able to do so.
<<Sure, we could probably program in a tendency to follow the laws, but probably not a very strong one, and NO tendency that all or even most individuals couldn't work around given only a tiny bit of thought.>>
I'd say the real question isn't "Could the laws be broken?" but rather "Will the laws be broken?" Anything is possible, but can we reduce the probability of it to the point of being negligible? I think we might, certainly in AIs that don't generally upgrade, and to a lesser extent in those that do.
Glen Finney