I lost track of this thread when you changed the Subject line without
adding "was: MVT: all-conquering philosophy". I don't read everything
that comes through this list, especially over the holidays. :) Let's
try to restrict this discussion to just one or two Subject lines.
Steve Nichols wrote:
> S> OK. I also think that language is faulty and misleading in general,
> S> which is why I am starting to develop a word-free visual philosophy
> S> (some examples at www.extropia.net vis phil sector).
> >Being visual is not enough.
> Nor is being wordy ..... but philosophy does not allow itself
> to use pictures, but in my approach I can use any symbolic forms.
On the contrary: philosophers DO avail themselves of pictures, if they
think it will help them make their point clearer. Plenty of
philosophical articles have diagrams. However, once they insert their
diagrams, they use words to refer to the diagram, or make up new words
to refer to concepts which the diagrams describe.
So, again, I assert that the difference here is in the formality of the language.
> >All the ordinary philosophical problems
> >can be translated into, for example, American Sign Language, with only
> >a few complications. After all, writing is visual. The difference is
> >in how much formal structure your language has, and how much is left
> >to be inferred from humanity and charity. On the formal end, you've
> >got the propositional calculus and Lojban. On the informal end,
> >you've got abstract dance, jazz,
> Purely visual & auditory surely ... we use our musical judgements not
> verbal ....
I'm not sure what you're trying to get across here. Sure, music is
different from words. (Though, when the words are spoken only, both
are "auditory.") But the relevant difference is their formality.
Certainly it would make no difference if I spelled out my words using
piano chords. (This would sound awful, of course, but...)
> >painting, and others. In the middle
> >towards the formal end you've got speech; ASL is a little less formal
> >than speech, and more dependent on "context" (ie charity and humanity)
> But I allow myself all the mediums ... so my analysis is more complete
> than one limited to linguistic analysis...
Again, ordinary philosophy already has all those media available.
What does that completeness get you? Does it answer some
philosophical questions? Then write an article and use some diagrams.
This would work because all you'd be doing is writing in an invented
picture-language. In the past, when people have invented languages in
which to discuss philosophical problems, e.g. the propositional
calculus, we have used these languages in articles written in English.
These articles have been especially convincing by virtue of their use
of the invented language. So, if your picture-language does the same
for philosophy as symbolic logic does (which I doubt, but, who knows)
you should be able to write an article in English using diagrams that
helps us ordinary language philosophers out.
Does it create new philosophical questions which cannot be answered or
even named verbally? If your language is as formal as English or
simply be structurally translated into English. German has words with
simply be structurally translated into English. German has words with
no single-word equivalent in English, but they may be structurally
analyzed and explained in English. We may even keep the German name,
or create a new English word to refer to the German referent. If your
picture-language is like that, then you haven't created something
radically new, except possibly to shift our attention to a certain
class of questions that can be written down in English, but which
haven't been before.
Finally, your language may be considerably less formal than English.
But then, your language is so different from philosophy as we know it
that it becomes its own entity, distinct from philosophy entirely,
just as jazz and abstract dance are.
Now, conclusively settling the matter on an old philosophical question
and raising new ones that no one has thought of before IS quite
valuable. But, even so, one shouldn't be under the misconception
that the language raises questions that cannot be named in English, or
presents arguments which CANNOT be stated in modern philosophy's terms.
> But you can't have an analytic discussion about mind if you
> are a physicalist either ... and maybe you can have a purely
> diagrammatic discussion about the mind using pictures ...
Again, we physicalists can be anti-realists about minds. Then we
*can* have analytic discussions about "minds." (Henceforth, I'll try
to use scare quotes to keep you from making the fallacy I referred to earlier.)
> >On philosophical grounds, then, I say that MVT doesn't seem very
> >elegant to me; that it doesn't even seem to be right on philosophical
> >terms, on account of the qualitative differences between "holes" (as
> >missing functionality) and feelings.
> Forget "holes" .. this was one of my arguments about your atomism.
> Substitute "abstract" or "absent" .. and consider that "symbols" are the
> medium of all types of mentation and that the pineal unitary sense organ
> has become symbolic rather than actual.
> Or does your type of physicalism deny symbols?
Well, my physicalism denies that symbols are anything more than their
physical parts: writing, for example, is nothing more than scratches on paper.
The words which I've sent to you are nothing more than bits on a disk,
signal down a wire, glowing phosphors on a screen, and are also stored
in the layout of our purely physical brains.
My physicalism denies that there is some non-physical meaning-object
which is out there somewhere, attached to the word. The word has a
use, an effect on ourselves and others, but nothing more.
> >Determinism doesn't rule out the possibility of history any more than
> >it demands fatalism; all I've described here is a bit of history of
> I repeat my question: "Are "principles" physical then? "
Yes. Principles are arrays of physical symbols, which may be stored
in the physical brain.
> I don't think you are using "metaprogram" in the accepted
> psychotherapeutic sense here, but are changing the idea to try
> and fit with your philosophical bias. Brains can not only change
> their "programming" but can reconfigure their own *hardware*
The brain can't *arbitrarily* change its hardware. It's obviously
restricted by the laws of physics, but it also suffers from many other
more restrictive limitations. The Turing machine has some obvious
limitations, but the fact that it can't change THOSE doesn't mean that
it's not just like a brain (in all the relevant ways).
More to the point, you say that the brain changes its "hardware." I
say that the very changes which you call changes in "hardware" are
really changes in "software." They're software because they CAN be
changed internally. The hardware is, by definition, the stuff that
has to be changed externally.
So, I'm not telling you that the brain's circuitry is analogous to the
circuitry of a Turing machine, but that the brain's hardware, the part
that cannot be changed internally, is equivalent to the Turing machine's hardware.
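A toy sketch of that distinction (my own illustration, with hypothetical names like `run` and `rules`, not anything from the thread): the fixed interpreter loop below plays the role of "hardware" — the running rules cannot alter it — while the rule table is ordinary data, "software" that the rules themselves are free to rewrite from inside.

```python
# Hypothetical sketch of the hardware/software distinction:
# the interpreter loop is "hardware" (fixed from the inside), while
# the rule table lives in data and can be rewritten by running rules.

def run(rules, state, steps):
    """Fixed 'hardware': repeatedly look up and apply the current rule."""
    for _ in range(steps):
        action = rules.get(state)
        if action is None:
            break
        state = action(rules, state)  # a rule may rewrite the rules dict
    return state

# 'Software': rules stored as data, able to reconfigure themselves.
def start(rules, state):
    # Self-modification: install a brand-new rule for state "b".
    rules["b"] = lambda r, s: "done"
    return "b"

final = run({"a": start}, "a", steps=10)
print(final)  # -> done
```

The program reshapes its own rule table while running, yet it never touches the loop in `run` — which is the sense of "hardware" argued for above.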
> >Yes, but I don't share those feelings with you.
> Ah, so you adopt an *Intuitionist * stance here .. your words have
> given out so you appeal to feelings. I happen to think most
> philosophical so-called analysis does come down to feelings
> and aesthetics ...
Now, I want to remind you that I don't share those "feelings" with
you. I'm an anti-realist about feelings. We should reinterpret them
as purely physical.
But, once we do that, yes, I take it that this discussion, as all
other (philosophical?) discussions, relies inherently on "intuitions."
But that doesn't mean that the discussion ends here. I suspect that
we agree enough on our intuitions that by drawing attention to the
right sorts of facts we can come to agree on the same conclusion, or
at least clearly identify in what way our intuitions diverge.
> >I do think that they're all physical; that they share
> >that one property in common.
> Time is the *measurement* of movement in space .. therefore
> must be subjective because measurement is integral, not
> incidental to the concept of time. There is no "time" out there
> that you can point to ... it is just a concept ... not physical.
My incapacity to point to something doesn't mean that it's not
physical. But no matter. I'm not picky about whether time is merely
the measurement or whether there's something to measure. This has
nothing to do with the fact that holes have places, whereas feelings,
in the ordinary mental realist sense, don't.
> > I take it that the correct move in a
> >phenomenological scenario is to be an anti-realist about physics: to
> >say "when you say matter, you really mean such-and-such matter
> >qualia..." or "when you say physical, you really mean having
> >such-and-such sensations in common..."
> I won't use the qualia jargon, but yes, I largely agree with the above.
Well, I'll just adopt this phenomenologist strategy to answer you
point above, then. Time may be a measurement, but it's a "physical"
measurement, whatever "physical" means.
> >Anyway, the very fact that "holes" have a "location" by definition
> >whereas "feelings" don't, is enough to show that "holes" are
> >analytically different from "feelings," whatever other properties
> >these things have.
> Sure, except that the only knowledge you have of the hole is
> indirect, via your feelings/ perceptions/ logical reasoning and
> depth perception &c. The illusion of the hole has an illusionary
> location .. there isn't any "hole" independent of the illusion, or
> if there is it is impossible to know. Same with any so-called
> "physical objects" .. but MVT idealism explains how the illusion
> is produced. I don't need to deny the existence of physical objects,
> maybe they are there, or maybe we hallucinate them, who knows
> and who cares. The physical brain is necessary, but not sufficient
> for consciousness. A self-referential and abstract component is
> needed in order to interact with the world of symbols & abstraction.
Again, you cannot get away from this problem by using idealism,
because you still have all the same problems, slightly modified for
idealism. What's the connection between sensations-of-brains and
sadness? How can you solve the problem of other minds with idealism?
All you have access to is sensations of their behavior, and of their
brains. How can you conclude that they have a mind on that basis?
You might be taking a page from the anti-realist's book and say "I
cannot be wrong about there being a mind there, because the mind
itself is on the same ontological level as an illusion. If I believe
that there's a mind there, then there's an illusion of a mind there,
so there's a mind there, because a mind is just an illusion of a mind."
But then I can make a similar move back in the physical realm: "I
can't be wrong in making belief statements about minds, because having
a mind is just the same as our 'believing' that there's a mind there."
(Again, scare quotes to remind you that I'm reinterpreting 'belief' to
be purely physical.) This, obviously, is just the Turing test: if it
"seems" to have a mind to us, then it does.
If you didn't find the Turing test satisfying, then you shouldn't find
your "mind-as-illusion" account satisfying. Both are "psychological,"
whatever we think that means, but neither of them hits the nail on the
head in conversation with a dualist.
> S> Anyway, you are not consistent saying that "we use our minds to
> S> demarcate" if your system does not include "minds" ... conscious
> S> organs.
> >We anti-realists get this a lot. It's completely fallacious.
> >Anti-realists about Xs assert that there aren't really Xs, AND that we
> >should reinterpret claims about Xs to be claims about Ys. So when
> >somebody says "The anti-realist contradicts himself when he says
> >such-and-such about Xs, yet maintains that there are no Xs!" they're
> >making a flat-out mistake. It is not wrong to talk about
> >consciousness or pain or other feelings; all I ask is that we remember
> >that these are handy terms for physical phenomena.
> I don't accept that "consciousness" is a handy term for a physical
> phenomena at all ..
Fine. But don't accuse me of contradicting myself, because when *I*
use the word "consciousness," I'm referring to physical phenomena.
> and have an example that might prove my case.
> With chemical medicine you can establish causal links between
> relief of symptoms or cure of illness and the physical properties of
> a drug.
> However, all experiments ever done on the subject have shown
> that a "placebo effect" operates .... but the sugar pill has no
> chemical effects in the cure, only psychological.
No strictly *chemical* effects, but the "psychological" effects are
physical effects. The placebo has an indirect physical effect to cure
the patient, by making the patient "think" that they are being cured,
and, for example, relieving stress, and bolstering the immune system.
All physical all the time. The placebo effect *is* physical.
> >The way to avoid this mistake is to remember that it is *very* rare
> >that an intelligent person will assert an outright contradiction,
> >especially an intelligent philosopher; they're probably saying
> >something else similar.
> Disagree. The wisest men in history have made glaring errors
> and believed crass theories. Especially philosophers.
Look, when reading and interpreting what other people say, especially
people from other cultures, you have to employ principles of humanity
and charity. You have to assume that they acquire beliefs in more or
less the same way that you do, and that you and they are largely
correct in your beliefs.
Suppose I just tell you that "my gavagai is red," but you assume that
I'm wrong about that. How will you find out what my gavagai is? You
might come up with some reasonable guess about what I might be wrong
about, but you'll do much better if you assume I'm right and look for
some red thing that might be my gavagai.
Similarly, if you read those "crass theories" with an eye towards
interpreting them with charity, with interpreting them as if they're
saying something basically right, you'll not only have a more
interesting time at it, but you might learn something as well.
> But holes have no "substance" and are JUST their boundaries.
Who cares? They have a location, whereas the realist's "feelings" don't.
> >But forgive me if I slip back into such handy phrases as "Imagine that
> >..." "I feel differently ..." or "... they both feel the same." I'm
> >being an anti-realist about these phrases. We both get to say them,
> >but we get to say them for different reasons.
> Yes, but when I say them I mean what I say, and when you say them
> they are packed together with an unwieldy bunch of provisos &c. in
> that you should really qualify each statement before using it.
No. I take the stronger view that my definition is right, and that
yours is wrong. It's you, I argue, who should qualify your beliefs
about feelings to point out that you don't just mean the physical
stuff, but something MORE. But until we agree about this, we'll just
have to interpret what the other says with charity and humanity.
> There is less work for the hypnotist .. but yes, works either case.
> The point I am making is for the primacy of the mental/ conscious.
> If I suggested to you under somnambulistic trance that I was burning
> a cigarette stub on your hand, not only would you experience it,
> but might well manifest burn marks! (See placebo effect argument).
Still, it would all be physical. Physical words would affect my
physical brain, which would affect my physical body.
> >I make truth claims because I'm following my intuitions, and my
> Where is this program located? Or do you just mean cultural
> conditioning and things you have learnt or been told?
In the brain. It is encoded in the layout of the neurons and their
connections, just as computer code is encoded in the layout of
transistors on a chip. The program is there. It can be changed
externally, by culture, experience, etc. but the program is there,
just as a robot's program is on its hard disk.
> >Do I make justified truth claims? I think I do, and that
> >has to be pretty much good enough for me.
> Are you conscious of making these justified truth claims?
Yes, I'm "conscious" of doing so.
> >Nobody's arguing that nature has actually designed a tape-reader, or
> >that the human brain has any tape. But nature might be stuck using a
> >million tape-reader-equivalents instead of something better because
> >Turing machine equivalents (the collection of which, in turn, is one
> >big Turing machine equivalent) are all that are physically available.
> You are wrong in fact here ... neurons are mini-transputers, more like
> transistors than Turing machine heads. And they flock together to work
> on particular problems. Talk of Turing machines is positively misleading.
Misleading in a description of state, but not at all in a description
of functions, limitations, capacity, etc.
> >Your anti-functionalist view suffers from all the same problems as
> >functionalism; all the same objections. Maybe it has all the same
> >intuitive support. Certainly it's worth noting that missing
> >pineal-eye functionality seems to cause positive physical
> >functionality like the kind Putnam was interested in. That's what led
> >us to conclude that the pineal eye was interesting: missing that
> >physical functionality seemed to grant us another functionality:
> >intelligent behavior. But we can't really use this as a solution for
> >the mind/body problem, or even as intuitive support for a solution,
> >because you'd just be arguing for functionalism: you'd be using
> >functionalism to argue that you'd solved the mind/body problem.
> I accept some of this ... but still feel it is worth exploring which
> types of problems can be cleared up by MVT .. and even if it
> provides a better analogy than existing ones, this is of value. I do
> think it answers the Leibnitz Law objection to Descartes, and on
> this (the original formulation of the mind-body problem) point I do
> think it overcomes Leibnitz objection .... which was that the pineal
> GLAND, or any other physical part of the brain by extension, cannot
> interact with "psychological" events which are not physical.
It only answers Leibniz as well as functionalism does. But if you
understand the objections against functionalism, you'll see that you
still haven't captured what mental realists want to capture about
"feelings." To have certain physical capacities is not to have a
feeling. It's easy to imagine someone with (or without) those
capacities who has or doesn't have feelings. They're different in
definition, in principle. And you haven't explained the link.
> The abstract, phantom pineal eye is not physical in the same way that
> the pineal gland is physical: thus it could interact ....
Sure, not physical the same way capacities are not physical. But
that's not enough.
> >Because the Turing-Church thesis tells us of what we human beings are
> >capable (namely, nothing more than of what a bunch of Turing machines
> >are capable, and therefore nothing more than of what one great big
> >fast Turing machine is capable) and what can possibly act like us.
> I think you are making an unjustified leap to a conclusion here. A
> malefic demon is capable of acting exactly like us, or maybe
> if we had your fairy physics that could explain everything ... but in
> fact what we have are brains that are massively parallel distributed
> systems, and not any of the other things. We have the solution already,
> so why do philosophers continue to look for worse solutions?
The Turing analogy tells us what we can and cannot do. That's the
use. We can't do anything more or less than what one big fast Turing
machine can do.
> >They WOULD behave the same way, though
> >one would do so much much slower (but that's all).
> NO ... I won't allow this. Turing machines do not reconfigure their
> circuitry, they do not compute in parallel, and I do not think it
> possible for them to be instantiated biologically ...
They can absolutely be instantiated biologically. A few years ago
people were toying around with DNA computers that could perform
Boolean operations encoded in DNA. The whole thing was DNA only. The
tape was there as DNA, the head was there as polymerase, the whole
deal. This was literally a biological Turing machine.
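For readers who haven't met one, here is a minimal Turing-machine simulator (a toy of my own, with made-up state names; it is not the DNA experiment itself). The DNA machines realised exactly these parts chemically: the tape as DNA strands, the head as polymerase.

```python
# A minimal Turing machine: a tape, a head, and a transition table.
from collections import defaultdict

def run_tm(rules, tape, state="start", blank="_", max_steps=100):
    """rules: (state, symbol) -> (new_state, write_symbol, move)."""
    tape = defaultdict(lambda: blank, enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        state, tape[head], move = rules[(state, tape[head])]
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Flip every bit, then halt at the blank cell after the input.
rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_tm(rules, "0110"))  # -> 1001
```

Nothing in the definition cares what the tape and head are made of; silicon, paper, or DNA all serve.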
But as for Turing equivalents, computing in parallel is still
equivalent to a big fast Turing machine (that is, equivalent in the
way they behave, which is what I said above). The brain can
reconfigure some "hardware," but there is other hardware that the
brain cannot reconfigure. That hardware is equivalent to the hardware
that the Turing machine cannot reconfigure. Everything the brain can
reconfigure can be simulated on a big fast Turing machine.
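The claim that parallelism adds speed but not power can be sketched in a few lines (my own illustrative example, not anything from the thread): a "simultaneous" update of many cells and a single worker plodding through a saved snapshot produce identical results.

```python
# Sketch: a "parallel" update of many cells gives the same result as
# one sequential machine working from a snapshot -- slower, but
# behaviourally identical.

def parallel_step(cells, rule):
    # Conceptually, every cell fires at once.
    return [rule(cells, i) for i in range(len(cells))]

def sequential_step(cells, rule):
    # One worker plods through the cells using a saved snapshot, so
    # later updates cannot see earlier ones -- matching the
    # simultaneous version exactly.
    snapshot = list(cells)
    return [rule(snapshot, i) for i in range(len(cells))]

# Each cell becomes the sum of itself and its left neighbour.
rule = lambda c, i: c[i] + c[i - 1]

cells = [1, 2, 3, 4]
result = parallel_step(cells, rule)
assert result == sequential_step(cells, rule)
print(result)  # -> [5, 3, 5, 7]
```

The sequential worker pays only a time cost, which is exactly the "much much slower (but that's all)" point above.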
> Furthermore, and the key point, they aren't CONSCIOUS but
> we are ... our neural wetware is capable of manifesting non-physical
> components and the associated phantom illusions ... do you claim
> that Turing machines could have, then lose but remember, a pineal
> eye (or any other body part come to that). I think not.
When we design a functional human-equivalent AI, it will obviously be
a Turing equivalent. Will you be in the camp of philosophers who will
insist that it still cannot be conscious?
-unless you love someone-
-nothing else makes any sense-
This archive was generated by hypermail 2b30 : Mon May 28 2001 - 09:56:16 MDT