Re: Putnam's kind of realism

Dan Fabulich (daniel.fabulich@yale.edu)
Wed, 3 Nov 1999 17:23:00 -0500 (EST)

'What is your name?' 'Eliezer S. Yudkowsky.' 'IT DOESN'T MATTER WHAT YOUR NAME IS!!!':

> It sounds to me like Putnam, or the person explaining Putnam, or
> someone, is failing to clearly distinguish between the question "What is
> truth?" and "What is rational?"

This isn't an error. Putnam explicitly argues for their equivalence. True statements are those statements which, objectively speaking, we OUGHT to believe.

Morality is objective in exactly the same sense and in exactly the same way that rationality is objective.

[If it helps, wherever I say "morality," insert "meaning of life." Also assume that I'll use rationality and morality interchangeably. If you don't believe that rationality is morality, then change every instance of the word "rationality" in my argument to "morality" and re-read.]

I have many midterms due this week, so I'm not going to post on this again for a while, but...

> The truth precedes us, generated us, and acts according to laws of
> physics which we cannot specify. There is an objective answer to the
> question "What is truth?".

This seems an odd thing to say, since these two claims are directly incompatible: you say that the laws cannot be specified, and you say that there is an objective answer. An "answer" is a special kind of statement. There is no such thing as a statement which cannot be stated. (If you think that there can be, then you're using a Very Different definition of statement than I am.) If it can't be specified, then it's not an answer.

Anyway, I hold that there is an objective answer to the question "what is truth," one which CAN be stated. I wholeheartedly agree that it preceded us, caused our creation, etc. I also hold that the question "what is truth" is equivalent to the question "what should I believe?", and that questions of morality, questions of "what should I do?", have objective answers. Thus, "what is truth" has an objective answer because "what should I believe" has an objective answer. Finally, "what should I believe" is exactly what I mean when I say "what is rational," so it ALSO has an objective answer.

> Rationality is the process whereby we attempt to arrive at the truth.
> There is no objective answer to the question "What is rational?", not
> *here*, not without direct access to objective reality.

If you mean that we simply don't have ACCESS to the objective answer to the question "what is rational," but that access is possible through thought/approximation, then we're probably in agreement. This is very different from saying that the answer does not exist.
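
To make that grade of access concrete (my analogy, not Putnam's, and only an analogy): the square root of 2 is perfectly determinate even though no finite decimal expansion states it exactly. We still have access to it through approximation:

    # Newton's method converging on sqrt(2). The answer exists
    # independently of us; each iterate is a better approximation
    # to it, even though no iterate ever IS it.
    x = 1.0
    for _ in range(6):
        x = (x + 2.0 / x) / 2.0  # standard Newton step for x*x = 2
        print(x)

The sequence converges because each guess is answerable to a standard outside itself (does x*x equal 2?), which is exactly the position I claim we're in with respect to "what is rational."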

> Rather, "How should rationality work?" is an engineering question
> about how to create systems that can model and manipulate reality - or
> rather, how to create parts of reality whose internal patterns mirror
> the whole, to the point that the internal process can predict the
> external processes in advance.

Here, again, we're in agreement. But what makes you think we have access to the answer to "how should rationality work?" That use of "should" drops this question into the realm of morality. This question should be as dimly lit (or as brightly lit) as the question "what is rational." (Indeed, on my account, you should have EXACTLY as much certainty about both of these questions.)
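
Since we agree on the engineering picture, here's the trivial version of it, in toy code (mine, obviously; a sketch under my own assumptions, not a claim about anyone's AI design):

    import random

    TRUE_BIAS = 0.7  # the "external process": a biased coin we can't inspect

    def external_event():
        return random.random() < TRUE_BIAS

    # The internal model: a running estimate of the bias. What it
    # *is* at any moment is just the current number; what it *should*
    # be is the external bias that the error signal pushes it toward.
    estimate, n = 0.5, 0
    for _ in range(10000):
        n += 1
        estimate += (external_event() - estimate) / n  # incremental mean
    print(estimate)  # approaches 0.7

The internal number isn't made correct by the system's say-so; it's correct insofar as the external process vindicates it. But notice that "vindicates" is already a should-word.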

> As for AI, it seems to me that the concept of an internalist mental
> model is a confusion between "is" and "should". A reference to "green"
> *is* the cognitive concept of "green", but what it *should* be, what the
> system tries to make it converge to, is the external reality of green.

But the phrase "external reality" CANNOT REFER to external reality in the sense that you use it. It can refer to your own internal construct which you call "external reality," but you can't actually cook up a theory of reference that connects your words to the right thing under your conception.

Having said that, I again assert that positing the existence of an external reality is a very good idea, that we objectively ought to do it, and that, according to my theory of truth, it is TRUE that external reality exists and that our words model it. We both agree that our beliefs SHOULD line up with external reality. The difference is that I know what my words mean when I say that, HOW they refer to external reality; you do not appear to.

> If you have a wholly internalist system, then the concepts don't
> converge to anything. If the only definition of correctness is the
> thought itself, there's no way to correct malfunctions. The system is
> meta-unstable. The AI thinks "I can't possibly be wrong; anything I
> think is by definition correct," and then it gets silly, just like
> subjectivist humans.

Again, this is absolutely wrong. Our beliefs converge on true beliefs. (Or, at least, they OUGHT to converge on true beliefs!) No AI could rationally accept the claim "I can't possibly be wrong." The claim is OBVIOUSLY irrational. The subjectivists buy it, but Putnam doesn't, and I sure don't.
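
The standard Bayesian way to see this (a sketch of the textbook point, not of anything in Putnam):

    # Updating P(H) on evidence E, with P(E|H) = 0.2 and P(E|~H) = 0.8.
    # A prior of exactly 1.0 -- "I can't possibly be wrong" -- is
    # unmoved by ANY evidence; any prior short of 1.0 is correctable.
    def update(prior, p_e_given_h=0.2, p_e_given_not_h=0.8):
        return (p_e_given_h * prior) / (
            p_e_given_h * prior + p_e_given_not_h * (1 - prior))

    print(update(1.0))   # stays 1.0, whatever the evidence says
    print(update(0.99))  # drops to about 0.96; the belief can be corrected

"Anything I think is by definition correct" is just the degenerate prior, and it's exactly the case in which correction, and hence convergence, becomes impossible.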

Again:

"Internalism is not a facile relativism that says, 'Anything goes'. Denying that it makes sense to ask whether our concepts 'match' something totally uncontaminated by conceptualization is one thing; but to hold that every conceptual system is therefore just as good as every other would be something else. ... Internalism does not deny that there are experiential *inputs* to knowledge; knowledge is not a story with no constraints except *internal* coherence; but it does deny that there are any inputs *which are not themselves to some extent shaped by our concepts*, by the vocabulary we use to report and describe them, or any inputs *which admit of only one description, independent of all conceptual choices*."

-Dan

-unless you love someone-
-nothing else makes any sense-

e.e. cummings