Re: FAQ Additions (Posthuman mind control)

den Otter (neosapient@geocities.com)
Wed, 24 Feb 1999 15:29:44 +0100

> From: hal@rain.org

> Billy Brown, <bbrown@conemsco.com>, writes:
> > Suppose my parents are religious, and they feel that anyone who is not a
> > member of their faith will suffer eternal damnation. Consequently, at an
> > early age they have me fitted with neural implants that will ensure I
> > fervently believe in their faith, that I will never violate its tenets, and
> > that I am incapable of ever changing this belief system.
>
> A very interesting question, independent of the posthuman question. What
> degree of control is moral for parents to exert over their children? More
> generally, what is moral when creating a new intelligence?
[snip]
> Generalizing the case Billy describes, if the universe is a dangerous
> place and there are contagious memes which would lead to destruction, you
> might be able to justify building in immunity to such memes. This limits
> the person's flexibility, but it is a limitation intended ultimately to
> increase his options by keeping him safe.
>
> How different is this from the religious person who wants to keep
> his own child safe, and secure for him the blessings of eternal life?
> Not at all different. Although we may not agree with his premises,
> given his belief system I think his actions can be seen as moral.

Maybe, but making someone a religious nut is ultimately always bad for himself and for society, so rational people would have every moral right to (try to) remove the religious programming from the child's mind and replace it with memes for critical thinking. Personally, I'd have no problem with outlawing the indoctrination of children with anything other than rationalism and critical thought. Given the damage that religion and related dogmas have done to individuals (life-long mental scars, insanity, false guilt etc.) and to society (oppression, stagnation, holy wars, mass hysteria etc.), I think this measure, though somewhat coercive in itself, could be fully justified. Ultimately it would improve everyone's situation, after all.

However, this certainly doesn't mean that *all* future intelligences should be maximized for critical thinking, only those that are intended to become "peers". Lower "servant" AIs should be made either completely emotionless, like your PC for example (preferable), or programmed to be unconditionally happy with what they do. As long as they don't suffer, this would certainly be "moral".