At 10:26 PM 25/11/99 -0600, Eli wrote, as he frequently does:
>of objective morality. Or rather, making "intelligence" the grounding
>point until such time as an instantiation of objective morality is
>discovered, at which point the intelligent thing to do may be to switch
>to an objectively based system.
>the more I think
>about it, the more I wonder if I'm asking the right questions.
Not before time, Grasshopper.
It seems to me that your ambition of finding an `objective or external morality' is based on a bafflingly simple error. You are seeking a teleological answer in a non-teleological cosmic substrate. You are asking after the ice-cream preferences of a dust cloud.

Morality is an emergent program for guiding the behaviour of complex social beings. Such beings are not written ahead of time into the foundations of the universe. Any moral code is therefore prudential at best - that is, it is a set of ranked instructions for how to attain goals that have been set arbitrarily in a complex cascade of adaptations and evolutionary kluges. These can certainly be tested by various criteria of effectiveness, but not against any deeper, universal, aboriginal dicta that preceded the (very recent) emergence of minds.

Deal with it. Stop looking for some rule in M-Theory that tells you why choosing not to eat blue-bellied flies on Tuesdays is objectively morally righteous.