Hal Finney wrote:
> How do you define a moral system? Then, how do you define "absolute morality"?
> Consider an ALife creature. Faced with a situation, it has a choice of
> actions, and an algorithm to choose an action.
Actually, this is almost exactly the way I define moral systems. I even considered writing an ALife called "HappyApplet" to put everything down in code, but put it off once I realized just how much basic architecture was needed to capture the Singularity logic. (At the very least, I would need a tightly integrated theorem prover and goal hierarchy.) You can find more on HappyApplet in the section on "Interim Goal Systems" in "Coding a Transhuman AI".
It doesn't answer your question, but it amplifies your definition of the problem.
> There are obviously a multitude of possible moral systems. If we think
> of the moral system as an algorithm which, given a situation and a list
> of actions, produces a rank ordering of the actions, then there would
> probably be potentially an infinite number of moral systems, since there
> are an infinite number of computer programs.
> How do we single out the one or ones which represent "absolute morality"?
(The phrase I use is "objective morality" or "External morality".)
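Finney's formulation above is concrete enough to sketch in code. The following is a minimal illustration, not anything from HappyApplet; all the names here (Situation, MoralSystem, make_scorer_system) are made up for the example:

```python
from typing import Callable, List, Sequence

# In this formulation, a moral system is just a function that, given a
# situation and a list of actions, returns a rank ordering of the actions.
Situation = dict      # an arbitrary description of the world state
Action = str
MoralSystem = Callable[[Situation, Sequence[Action]], List[Action]]

def make_scorer_system(score: Callable[[Situation, Action], float]) -> MoralSystem:
    """Build a moral system from a per-action scoring function.

    Since any computable scoring function yields a moral system, there
    are as many moral systems as there are programs."""
    def rank(situation: Situation, actions: Sequence[Action]) -> List[Action]:
        return sorted(actions, key=lambda a: score(situation, a), reverse=True)
    return rank

# A toy system: score each action by a "happiness" value stored in the
# situation itself (purely illustrative).
toy = make_scorer_system(lambda s, a: s.get(a, 0.0))

ranking = toy({"share": 2.0, "hoard": 1.0, "destroy": -5.0},
              ["hoard", "destroy", "share"])
print(ranking)  # ['share', 'hoard', 'destroy']
```

Nothing in this sketch singles out one such function as the "objective" one, of course; that is exactly the open question.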
> What does this concept mean? I can't get a grip on it. Is there
> really such a thing?
Is there really such a thing? The only honest answer is "I don't know". I can tell you how the "objective morality" would be singled out, however. Unfortunately, this gets into some pretty deep issues, down into the qualia again, and I'm running out of time. The quick explanation, once again, proceeds by analogy between "morality" and "reality".
Out of all the possible descriptions of the world that the ALife might choose, which one is "true"? The one that corresponds to its perceptions? But we know from our own investigation that perceptions are often wrong. The problem that Hughes points out is that there's a basic disconnect between perceptions and reality; we can know the perceptions exist, but the idea of reality is a wild speculation. And why this strange idea of "reality" in the first place? Why, exactly, does anything exist at all? Why this peculiar delusion that one Turing machine is "real" while the rest are not? What distinguishes "true" perceptions from "false" perceptions? How is one set of opinions, out of all the opinions there are, singled out as "correct"? And, considering that any statement requires another statement to prove it, how can we believe anything without wandering into an infinite regress? And how do we get from there to our absolute belief in qualia, the "I think therefore I am", that apparently ties the whole meta-infinite regress into a single knot?
I don't know. As far as reason can tell me, there is no good reason why objective morality *or* objective reality should exist. My hope is that there is a sensible explanation for it all, somewhere on the level from which you and I look like goldfish, and that there really is something called "objective reality" that determines the "correct" set of opinions, and that somewhere out there is something called "objective morality" that determines the "correct" set of choices.
In short, when I said it would take superintelligence to figure this all out, I meant it.
--
email@example.com          Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/sing_analysis.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I think I know.