Thank you for your thoughtful, patient, and articulate response, Lee. I'm having a difficult time with this concept, and I appreciate your efforts to enlighten me.
Lee Daniel Croker wrote:
>> How did you determine that is your best course? What exactly
>> is wrong with a rational, functional, utilitarian morality?
>"Rational", "functional", and "utilitarian" are value judgments
>as well, so there might be something very "wrong" with them, or
>there might not be. Or there might be no way to judge.
Probably the latter.
>Our epistemologies have improved over the years to the point
>where many of us are quite willing to express confidence in and
>make a personal /commitment/ to the "reality" of propositions
>about the world. I plant crops, confident the Sun will rise
>tomorrow to nourish them; I treat my infection with antibiotics
>rather than leeches; I refrain from filling my gas tank with
>milk, confident that my understanding of combustion justifies
>that decision. We further have confidence that any two
>sufficiently intelligent beings will reach the same conclusions
>about nature; we call this "objective reality".
>Is there any reason to suspect that our moral philosophies will
>not also continue to improve as our natural philosophies have?
>Is there some barrier that will prevent future intelligences
>from having as much confidence in their choice of action as I
>have now in my descriptions of nature? Is there some reason
>that two intelligences /cannot/ of necessity reach the same
>moral conclusion about the same circumstances? Might it not
>be the case that our moral epistemologies will evolve to the
>point where I can bet my life that any other intelligent being
>will reach the same moral conclusion as I, just as I would bet
>it on eir reaching the same physical conclusion as I? I would
>call that state of affairs "objective morality".
It sounds like you're saying that if enough people agree on a moral system, it can be considered an objective reality. If everyone on Earth agreed that the Sun was warm, I would still withhold judgement until unbiased instrumentation repeatedly confirmed the hypothesis. I would then have high confidence, as I assume you would. The same goes for the rest of objective reality. But how are morals to be measured? Is broad consensus across species really an acceptable criterion? What if the 10,000th species we run into disagrees? Do we simply conclude that they're "unevolved"?
>Objective reality is a powerful concept that allows humans to
>do miraculous things, like building bridges that don't collapse
>and cars that run. I do not yet have confidence that there
>exists an objective morality, but the potential it would hold
>for allowing us to do even more miraculous things demands that
>I seek it, for the same reason I must seek objective reality.
>Just because I can't see it is no reason to abandon the search.
Please explain. What leads you to believe that an "objective" morality is more powerful than a non-objective (perhaps rational, functional, utilitarian) morality? What advantage does objectivity offer? Is it not more parsimonious to hypothesize that moral systems are a function of evolution and culture? Would you suggest that moral systems exist without sentience? Are they objective in that sense? If the existence of a moral system depends upon our contemplation of it, how can it be objective? I suspect that the laws of natural selection are at work wherever sentience arises, but that cultural differences will most likely produce differing moral systems from species to species.
Here's a question. Would you expect an SI from Earth to have the "exact" same moral system as an SI from Vega?
>Until then, I also see value in behaving /as if/ there is such
>a thing, and making a personal commitment to behaving in a
>manner consistent with my current best guess as to what it is,
>because I have no better moral epistemology to use...yet.
My guess is that your moral system and mine are highly congruent. I also suspect that alien intelligences have constructed moral systems similar to ours. None of that, to my mind, necessarily lends credence to the notion of an objective morality.
But perhaps we are working from different definitions. I suspect that moral systems among highly evolved intelligences will be highly congruent due to the laws of natural selection and the laws of social psychology. If that turns out to be the case, does it make morality objective?