Rationality, Miracles and ETI

Dan Fabulich (daniel.fabulich@yale.edu)
Tue, 2 Nov 1999 17:19:13 -0500 (EST)

'What is your name?' 'Eliezer S. Yudkowsky.' 'IT DOESN'T MATTER WHAT YOUR NAME IS!!!':

> Maybe people who are already silly to begin with can't accept the idea
> that this whole Universe might be a computer simulation run by
> interventionist sysops without losing all track of reality, but as far
> as I'm concerned, in the event that the Matrix Hypothesis was proven,
> I'd simply go on thinking the same way as always. If I don't believe in
> miracle X, it's because miracle X is easier to explain by reference to
> human legend than by reference to intervention, not because intervention
> is absolutely impossible. Think relative probabilities, not proof.
> Only weak minds demand certainty.

I recently started reading a book called _Reason, Truth and History_ by Hilary Putnam. (I highly recommend this book to everyone, though it is quite difficult.) He argues in favor of an "internalist" or "anti-realist" theory of truth, rather than a correspondence theory of truth; I think his argument has relevance here.

Under a correspondence theory of truth, a given sentence, say "snow is white," is true on the condition that there is real snow, out there in the real world, and that the snow really exhibits the property of being white.

Under an internalist theory of truth, however, we merely generate models (such as scientific models) which we use to attempt to explain/predict our experiences. In this sense, we *posit* the existence of a real world, snow, whiteness, etc. in an attempt to explain/predict. So when is a claim true? Well, under internalism, all we have is the rational acceptability of the claims presented to us; thus, truth is simply rational acceptability under ideal epistemic conditions. "Snow is white" is true iff, under ideal epistemic conditions, we would find that the claim that "snow is white" is rationally acceptable.

On account of this, there can be more than one true explanation of a phenomenon, because two rival explanations may be equally rationally acceptable. More relevant to this conversation, however, is this: if a given claim would NOT be rationally acceptable under ideal epistemic conditions, then the claim is false. The important result here lies in Eliezer's claim above: even if the Matrix Hypothesis were somehow "proven," he'd still use human legend rather than intervention to explain/predict. (Me, too!) In Putnam's terminology, Eliezer would NOT rationally accept a Matrix Hypothesis, even under ideal epistemic conditions. Therefore, since truth just IS rational acceptability under ideal epistemic conditions, the claim is false, full stop.

Putnam's argument is quite convincing to my mind. It stems from the philosophy of language: he shows that there is no way for a correspondence theorist to give an account of how words in a sentence *refer* to specific real objects out in the real world, because there are too many possible correspondences between words and objects. To *specify* one particular correspondence would require one to already have the capacity to REFER to that correspondence. Since we were trying to give an account of how reference happens in the first place, we cannot invoke reference in the process. Without a working theory of reference, the correspondence theory of truth becomes nonsense.

(Note that not even the intentions of the author/speaker can do the work here, since the correspondence theorist cannot show how the thoughts of the author can be ABOUT the things in the real world [that is, how the author's thoughts *refer* to the real world!] without invoking reference in order to do so.)
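
To make the "too many correspondences" point concrete, here's a toy Python sketch (my own construction, not Putnam's actual formalism): two distinct word-to-object mappings that make exactly the same sentences come out true, so truth conditions alone can't single out one mapping as *the* reference relation.

    # Toy illustration (mine, not Putnam's): two different reference
    # assignments that satisfy exactly the same sentences.  If truth
    # conditions are all we have, nothing privileges one mapping as
    # "the" correspondence between words and objects.

    def is_true(reference, white_things, subject):
        """Evaluate '<subject> is white' under a given interpretation."""
        return reference[subject] in white_things

    # Interpretation 1: "snow" names object a, "grass" names object b.
    interp1 = ({"snow": "a", "grass": "b"}, {"a"})
    # Interpretation 2 permutes the domain: "snow" names b, "grass" names a.
    interp2 = ({"snow": "b", "grass": "a"}, {"b"})

    for reference, white_things in (interp1, interp2):
        print(is_true(reference, white_things, "snow"))   # True under both
        print(is_true(reference, white_things, "grass"))  # False under both

Both interpretations assign different objects to "snow," yet every sentence gets the same truth value under each; nothing in the truth conditions can tell them apart.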

In contrast, internalism has none of these problems, since the objects in question were POSITED to exist in the first place. For internalism, "snow" just refers to snow; since we invented "snow" as a theoretical construct, that's all you can say on the matter. In the same sense, the character "7" just refers to the number seven: since we invented seven to begin with, "7" can't fail to refer to the correct thing.

If the correspondence theory of truth fails, as I believe it does, and all we have left available to us is rational acceptability, then Power-intervention may be false under the only notion of true/false that makes sense.

-Dan

PS for AI programmers

Eliezer, even if anti-realism isn't the sort of philosophy you'd like to adopt generally, it nonetheless seems to me that this is exactly the sort of way that an AI should think about problems: in terms of models which explain/predict, rather than in terms of reference/correspondence to the way things "really are." When I checked out your website a very long time ago, your approach seemed similar to this, but all too often it invoked metaphysical realism to "settle" various questions.
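
To gesture at what I mean, here's a minimal Python sketch (all names here are my own hypothetical choices, nothing drawn from your site): an agent that ranks candidate models solely by how well they explain/predict its observations, with no "corresponds to reality" predicate anywhere.

    # Minimal sketch, assuming an "internalist" agent: models are ranked
    # only by predictive adequacy; there is no notion of correspondence
    # to an external world.  All names are hypothetical.

    def predictive_error(model, observations):
        """Sum of squared prediction errors -- the only 'truth-maker' in play."""
        return sum((model(x) - y) ** 2 for x, y in observations)

    def best_model(models, observations):
        """Prefer whichever posit best explains/predicts experience.
        A tie would mean more than one equally 'true' description."""
        return min(models, key=lambda m: predictive_error(m, observations))

    observations = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]

    def linear(x):      # posit 1: a line
        return 2 * x + 1

    def constant(x):    # posit 2: a constant
        return 3.0

    print(predictive_error(linear, observations))    # 0.0 -- acceptable
    print(predictive_error(constant, observations))  # 8.0 -- ruled out
    print(best_model([linear, constant], observations) is linear)  # True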

(Take care not to interpret my position as the claim that our "experiences" are all that there "really is," i.e. that our models must correspond to our experiences. Even the similarity/dissimilarity of our experiences is posited/constructed by us, though not just any old conceptual scheme is rationally acceptable.)

If I'm right, then this appeal to metaphysical realism will get your AI in big trouble if it ever starts wondering about the problem of aboutness/reference and doesn't stumble upon anti-realism. If it concludes that its own thoughts have NO aboutness, it may shut down and go quiet. Starting with anti-realism (and then allowing for the rational possibility of rejecting it later, as should be the case for all of your initial instructions) may avert that outcome before it arises.
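
Here's one hedged sketch, again purely hypothetical on my part, of what revisable initial instructions might look like:

    # Hypothetical sketch: seed stances stored as revisable beliefs
    # rather than fixed axioms, so the system can later reject any of
    # them -- including anti-realism itself -- on its own evidence.

    class Belief:
        def __init__(self, claim, acceptability, revisable=True):
            self.claim = claim
            self.acceptability = acceptability  # degree of rational acceptability
            self.revisable = revisable

        def revise(self, new_acceptability):
            if self.revisable:
                self.acceptability = new_acceptability

    # No belief is seeded as unrevisable, not even the stance itself:
    seed_beliefs = [
        Belief("judge models by explanation/prediction, not correspondence", 0.9),
        Belief("my thoughts need no external 'aboutness' to be meaningful", 0.8),
    ]

    # Later reasoning may lower (or raise) any seed belief's standing:
    seed_beliefs[0].revise(0.6)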

-Dan

-unless you love someone-
-nothing else makes any sense-

e.e. cummings