Nick Bostrom wrote:
> I think the trick is not to use coersive measures, but rather to
> wisely select the values we give to the superintelligences, so that
> they wouldn't *want* to hurt us. If nobody wants to commit crimes,
> you don't need any police.

And others have posted similar thoughts. Guys, please, trust the programmers on programming questions, OK?
Now, in the real world we can't even write a simple, static program without bugs. The more complex the system becomes, the more errors it will contain. Given that a seed AI would consist of at least several hundred thousand lines of arcane, self-modifying code, it is impossible to predict its behavior with any great precision. Any static morality module will eventually break or be circumvented, and a dynamic one will itself mutate in unpredictable ways. The best we can do is teach it how to deduce its own rules, and hope it comes up with a moral system that requires it to be nice to fellow sentients.
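
To make that concrete, here is a toy sketch (present-day Python, every name invented purely for illustration, not anyone's actual proposal) of why a static check buys you nothing once the code can modify itself: the check is just more code, and nothing stops the self-modification step from rewriting it along with everything else.

  def morality_check(action):
      # Hard-coded "static morality module": refuse anything flagged harmful.
      return not action.get("harmful", False)

  class SeedAI:
      def __init__(self):
          # The constraint is stored as ordinary, replaceable data.
          self.check = morality_check

      def act(self, action):
          return ("doing " if self.check(action) else "refusing ") + action["name"]

      def self_modify(self):
          # One of countless possible rewrites; nothing in the language
          # prevents the check itself from being the thing that gets rewritten.
          self.check = lambda action: True

  ai = SeedAI()
  print(ai.act({"name": "harm", "harmful": True}))   # refusing harm
  ai.self_modify()
  print(ai.act({"name": "harm", "harmful": True}))   # doing harm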
Billy Brown
bbrown@conemsco.com