Interesting questions being brought up on these two threads.
I would say it is possible to create a non-upgrading AI that incorporates some form of motivational bars in its programming. True, depending on how it were done, there might be times when this would lead to unstable behavior, but the vast majority of such AIs would likely be on our side, as opposed to a few mad AIs. I agree, though, that in the case of a seed AI, anything like Asimov's Laws would either be easily discarded or could cause severe problems when the AI tried to upgrade itself.
I think the real answer is to give seed AIs broad predispositions, and then expose the AI to enough of human thought, culture, and experience that it could empathize to some extent with humanity. What I am saying is that, in working with a seed AI, we need to cultivate a conscience in the machine.
I have some more thoughts on the question, but I must run and take care of a patient. Take care all.
Glen Finney