RE: FAQ Additions (Posthuman mind control)

Nick Bostrom (bostrom@ndirect.co.uk)
Thu, 25 Feb 1999 01:30:03 +0000

Billy Brown wrote:

> If you simply want to make sure they think morality is important, and that
> they start out with a solid understanding of human ideas about morality,
> then I think you've got the right idea. If you want to devote special
> effort to ensuring that a few core ideas, like "don't kill people without a
> really, really pressing reason" are given a high level of priority, that is
> only reasonable. I think there are real dangers in getting overambitious
> about this sort of thing, but that's a different issue.

What I would like to see is that they are given fundamental values that include respect for human rights. In addition, I think it would in many cases be wise to require that artificial intelligences that are not yet superintelligences (and so cannot perfectly understand and follow through on their fundamental values) be built with some cruder form of safeguards that prevent them from harming humans. I'm thinking of house-robots and the like, which should perhaps be provided with instincts that make it impossible for them to perform certain sequences of actions (ones that would harm themselves or humans, for example).
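[To make the "cruder safeguard" idea concrete, here is a minimal sketch of what a built-in veto on forbidden action sequences might look like, assuming the robot's planner can tag its candidate actions. Every name in it (Action, SafeguardFilter, FORBIDDEN_TAGS) is hypothetical, and the code only illustrates the instinct-as-hard-veto idea; it is not a proposal from the original post.]

# Toy sketch: a hard-coded "instinct" layer that vetoes a proposed plan
# before a sub-superintelligent agent can execute it. All identifiers
# here are illustrative, not part of any real robot control API.

from dataclasses import dataclass

# Tags a planner might attach to candidate actions (hypothetical).
FORBIDDEN_TAGS = {"harms_human", "harms_self"}

@dataclass
class Action:
    name: str
    tags: frozenset  # e.g. frozenset({"harms_human"})

class SafeguardFilter:
    """Crude, non-negotiable check applied to every planned action sequence."""

    def permits(self, plan: list[Action]) -> bool:
        # Reject the whole sequence if any step carries a forbidden tag.
        return all(not (a.tags & FORBIDDEN_TAGS) for a in plan)

def execute(plan: list[Action], guard: SafeguardFilter) -> None:
    if not guard.permits(plan):
        raise PermissionError("plan vetoed by built-in safeguard")
    for a in plan:
        print(f"executing {a.name}")  # stand-in for the robot's actuators

# Example: a house-robot plan with one harmful step is blocked outright.
guard = SafeguardFilter()
plan = [Action("open_door", frozenset()),
        Action("swing_axe", frozenset({"harms_human"}))]
try:
    execute(plan, guard)
except PermissionError as e:
    print(e)

The point of the sketch is only that the veto sits below the level of deliberation: the agent never gets to weigh the forbidden step against its other goals.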

> Perhaps the line could be re-worded to avoid this confusion? Something like
> "In the first case, we could make sure that they possess a thorough
> understanding of, and respect for, existing human moral codes that emphasize
> respect and toleration of others." would seem to convey your intended
> meaning without the possibility of confusion.

Understanding is not enough; the will must also be there. They should *want* to respect human rights; that is the crucial point. It should be one of their fundamental values, just as caring for one's children's welfare is a fundamental value for most humans.

Nick Bostrom
http://www.hedweb.com/nickb
n.bostrom@lse.ac.uk
Department of Philosophy, Logic and Scientific Method
London School of Economics