Billy Brown, <bbrown@conemsco.com>, writes:
> Once you decide to look and see if there is an objective morality, you have
> (broadly speaking) three possible results:
>
> 1. You find external, objectively verifiable proof that some particular
> moral system is correct. Then you're pretty much stuck with following it.
Is this notion coherent? Does it make sense to speak of a proof that a moral system is correct?
For claim (1) above to be true, there must be a meta-moral system which selects a given moral system as the best one. But of course such meta-moral systems exist; in fact, there is an essentially unlimited number of them (one might rank moral systems by how much happiness they produce, another by how internally consistent they are, another by how closely they match some sacred text, and so on). How do we specify which one is best?
Do we then have to introduce a meta-meta-moral system to rank the meta-moral systems?
I don't see how to ground this regress. It doesn't even seem to me that it makes sense to say that a particular ranking is objectively selected.
I'd like to see an example of an objectively best moral system for a simple case. Consider a simple alife program: a simulated organism which interacts with others in a simulated world, reproducing, eating, and trying to stay alive. Any algorithm it uses to decide what to do can be interpreted as a moral system. What would it mean for there to be a proof that a particular algorithm is morally correct?
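To make the question concrete, here is a minimal sketch of the kind of decision rule such an organism might run. This is purely illustrative; the action names, the state variables, and the greedy scoring rule are all invented for the example. The point is only that something of roughly this shape is what a proof of "moral correctness" would have to single out from all the alternatives.

    # Toy alife organism: its entire "moral system" is the rule it
    # uses to score and choose among candidate actions.

    ACTIONS = ["eat", "reproduce", "flee", "rest"]

    def score(action, state):
        # One arbitrary scoring rule among endlessly many possible ones.
        if action == "eat":
            return 10 - state["energy"]            # eat more when hungry
        if action == "reproduce":
            return state["energy"] - 5             # reproduce when well fed
        if action == "flee":
            return 8 if state["predator_near"] else 0
        return 1                                   # resting is a weak default

    def choose_action(state):
        # The organism's "ethics": pick the highest-scoring action.
        return max(ACTIONS, key=lambda a: score(a, state))

    state = {"energy": 3, "predator_near": False}
    print(choose_action(state))                    # -> "eat" under these weights

Any change to the weights yields a different, equally well-defined rule, and nothing internal to the simulation seems able to mark one such rule as the objectively correct one.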
Hal