Billy Brown wrote:
> Here we have the root of our disagreement. The problem rests on an
> implementation issue that people tend to gloss over: how exactly do you
> ensure that the AI doesn't violate its moral directives?

It's actually rather straightforward: any moral directives we hardwire into an AI, it will consider part of its own self rather than an external constraint.
This brings up the subject of limits. As extropians, we believe in there being few or no limits on human beings, beyond a prohibition on harmful interference with others. So we must ask: does this sort of moral engineering fit with extropy? I say it does, for one reason only.

We are talking about the design specs of beings not yet in existence, much as we might talk about the possible genetic codes of children we might have. We are not talking about altering beings already in existence. Altering existing beings against their will is obviously against extropy; altering the design of a being not yet in existence is not. Once a design-altered individual comes into existence, its sense of self is derived from its design. That we were able to finely control what type of individual came into existence is no more against extropy than controlling what the genetic code of our children will be.
Mike Lorrey