Alejandro Dubrovsky wrote:
> I'm assuming that what you are doing is trying to maximize the value of
> the system. I don't see, though, how you can assume that the goals' values
> are positive. I.e., "Either life has meaning or it doesn't, but I don't see
> any way of knowing if the discovery of the meaning of life is good or bad."
There's no way of knowing whether life is good or bad, but it is unqualifiedly good that a superintelligence find out which is the case and act on it. Either way, the correct value is served.
> > Logical Assumptions:
> > LA1. Questions of morality have real answers - that is, unique,
> > observer-independent answers external from our opinions.
> > Justification: If ~LA1, I can do whatever I want and there will be no true
> > reason why I am wrong; and what I want is to behave as if LA1 - thus making my
> > behavior rational regardless of the probability assigned to LA1.
> I disagree. If there are multiple, observer-dependent answers, then
> there's still a morality system that affects you and you could still be in
> the wrong, and this situation would fall into ~LA1.
True. Perhaps I should say opinion-independent rather than observer-independent. Given the number of strange games physics plays with observers, observer-dependence is certainly possible. But evolution is absolutely dependent on the observing gene, since it's a differential competition; and unless I see some excellent evidence, I'm not going to seriously consider the possibility that our particular type of observer-dependence translates exactly into reality. What I mean by observer-independence is really more like "independent of our brand of observer-dependence".

There are various interesting scenarios here. But I don't see an observer-dependence scenario of plausibility comparable to observer-independence that contributes a large anti-Singularity factor.
> And even if LA1, I don't see how BEHAVING as if LA1 is more rational than
> behaving as if ~LA1. As in John Clark's email about Pascal's wager,
> the rational way to behave if LA1 might be to behave as if ~LA1, depending
> on the nature of the real moral answers.
I agree completely. Behaving as if the Singularity overrules all other moral considerations and the end justifies the means is likely to get you slapped down and tossed in prison. One must respect the observer-dependent wishes of others to get along.
> My rejection of LA1 makes me unfit (by your conclusion, with which I
> mostly agree) to argue rationally about morality, and I suppose my
> claim (like many others') is that you cannot argue rationally about
> morality, since LA1 seems very weak.
Would you agree that LA1 is either true or false? (Call this meta-LA1.) We might not be able to debate morality, but we can certainly debate LA1, which is what we are doing. For that matter, if you agree to a weaker version of LA1 in which some things are "wrong" even though what's "right" is observer-dependent, or if you propose an observer-dependent (but non-arbitrary) system whose logic is subject to disproof, we can still have a rational debate.
The key requirement is that there should be some method of disproof.
--
email@example.com  Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/sing_analysis.html
Disclaimer: Unless otherwise specified, I'm not telling you
everything I think I know.