Nick Bostrom wrote:
> That looks like the kind of comment for which CritLink would be
I CritLink'd it, but bear in mind that I didn't see any reference from your page to the CritLink version.
I would also suggest that, after the CritLink version has been around for a while, you take any comments that the reader deserves to hear and incorporate them into the body of the FAQ.
> Actually, in his latest book "Robot", Moravec explicitly proposes
> that laws require that robots be built "securely nice in the first
> place". "Every nuance of their motivation is a design choice. They
> can be constructed to enjoy the role of servant to humankind." And it
> is a "matter of life and death to humans" that they "do not have a
> right to vote on the laws that govern and tax them." (pp139-40). In
> the long run, Moravec looks forward to advanced robots superseding
> humans and taking over the universe, but we both can and should
> ensure a comfortable retirement for humanity by programming in
> suitable "internal laws" in our mind children.
Well, I suppose that shows the need for programmatic humility in such discussions. "Every nuance of their motivation is a design choice." It probably didn't occur to him that interim goals could and would come into existence on their own, so he probably wouldn't have taken any precautions. I wonder what would have happened when conflicts arose.
My statement was based on Moravec's review of _Engines of Creation_ in _Technology Review_ (1986):
"Why should machines millions of times more intelligent, fecund, and industrious than ourselves exist only to support our ponderous, antique bodies and dim-witted minds in luxury?"
--
firstname.lastname@example.org
Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/sing_analysis.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I think I know.