Steve Nichols wrote:
> S> OK, do you take a Rylean view that "mind" is a category mistake and only
> S> seems a thing because language exists that describes it as an 'object?'
>
> >No, but I reserve "category error" to refer to a very particular kind
> >of logical error. I think the notion of a "mind" and "mental events"
> >makes logical sense, that there's nothing logically contradictory
> >about them. I think one can be a consistent epiphenomenalist.
>
> So you reject Ryle's general approach, which is behaviourist
> in much the same way that yours seems to be?
No, I agree with Ryle's notion that "consciousness" is a kind of
mistake if taken literally, though not a "category error." Calling
Ryle a "behaviourist" throws him into a box with a ridiculously wide
range of people, including Skinner. Better to call us "physicalists,"
since I disagree with most behaviorists but agree with some others.
> >To the extent that I agree with Ryle, I take him to be telling a
> >plausible story about what led us to conclude that consciousness is
> >really out there, that we shouldn't reinterpret "consciousness" to
> >refer to non-mental entities.
>
> OK. I also think that language is faulty and misleading in general,
> which I why I am starting to develop a word-free visual philosophy
> (some examples at www.extropia.net vis phil sector).
Being visual is not enough. All the ordinary philosophical problems
can be translated into, for example, American Sign Language, with only
a few complications. After all, writing is visual. The difference is
in how much formal structure your language has, and how much is left
to be inferred from humanity and charity. On the formal end, you've
got the propositional calculus and Lojban. On the informal end,
you've got abstract dance, jazz, painting, and others. In the middle
towards the formal end, you've got speech; ASL is a little less
formal than speech, and more dependent on "context" (i.e., charity
and humanity).
Philosophical analysis is something worth doing; if philosophical
issues don't appear at all in some more-informal visual language, then
that just tells me that you can't do philosophy in that language. If
translating problems into a certain language makes a certain answer
appear obvious, then that may be a compelling argument for that
answer, as we currently take to be the case with the propositional
calculus. But just as you can't have an analytic discussion about
mind in the language of jazz (no, you can't, [no, you can't]), that
doesn't mean that we should stop playing/listening to jazz or stop
having analytic discussions about the metaphysics of mind.
> Yes, a shorter or more elegant proof in maths is better than one
> using more terms. I claim MVT is the simplest and most elegant
> account. MVT also explains more phenomena, and reduces several
> other theories in science (and philosophy) to a more basic account.
On philosophical grounds, then, I say that MVT doesn't seem very
elegant to me, and that it doesn't even seem to be right, on account
of the qualitative differences between "holes" (as missing
functionality) and feelings.
> >After the fact, we now regard these to be normative
> >scientific values, principles which guide our scientific beliefs.
>
> Are "principles" physical then? Otherwise your determinist position
> starts to come apart ... What about "desires" ... didn't you say that
> our beliefs only come from desires (or was it vice-versa)?
We change parts of our programming from time to time, but when we do,
we follow a metaprogram. Metaprograms change from time to time, but
when they do, it happens under a metametaprogram. And so on.
Determinism doesn't rule out the possibility of history any more than
it demands fatalism; all I've described here is a bit of history of
science.
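To put that in code (a minimal sketch of my own, in Python; the
levels and the "too small" rule are invented for illustration, not a
claim about how psychology actually works):

    # A deterministic agent whose rules change, but only under
    # higher-level rules. Nothing here is random; the whole history
    # of rule changes is fixed by the initial program plus the inputs.

    def base_program(x):
        return x + 1                  # level 0: ordinary behavior

    def metaprogram(program, feedback):
        # level 1: revises the base program by a fixed rule
        if feedback == "too small":
            return lambda x: x + 2
        return program

    program = base_program
    for observation in [3, "too small", 3]:
        if isinstance(observation, str):
            program = metaprogram(program, observation)
        else:
            print(program(observation))   # prints 4, then 5

The agent ends up with different rules than it started with, but
every change happened under a rule one level up. That's all I mean by
determinism being compatible with history.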
> That may be true if words are your ONLY medium of communication.
> The "questioning" idea might not be problematic if we commune using
> touch signals, or visual display predominantly. "Questioning" might
> translate to a "feeling of puzzlement" of a particular grain. I might
> want to say that we can rely on and trust our *feelings* (that MVT
> is the true account) more than lots of us currently do. Or I might
> appeal to pattern completion and recognition, and suggest that our
> judgement be based on a satisfactory visualisation of the matter.
Yes, but I don't share those feelings with you.
> S> The category of a particular area of space as a "hole" is indeed
> S> psychological ... the hole can only be demarked independently of its
> S> (physical) substrate by the naming of it (seeing it, pattern completion,
> S> and formulating it as a linguistic/ conceptual entity). It has no
> "matter"
> S> so isn't physical ..... but is observable ... an observer-related effect.
>
> >Yes, but we must be careful not to overload the term "psychological."
> >Feelings are psychological, and so are "holes," but they're
> >psychological in very different ways. Sadness, for example, has no
> >physical place at all, by virtue of its purely mental character.
>
> I don't disagree with this. We have infinite-state potential to experience
> any feeling imaginable.
I'm not sure what your response had to do with my original claim. I
just meant that feelings have no location. The finite-state versus
infinite-state question doesn't seem to enter into that at all.
And being infinite-state certainly isn't an INTUITION I have. What
would it feel like to be very-many-state? Would that feel different
from being infinite-state?
You might say "it doesn't feel like anything to be finite-state," but
I certainly don't see why *that* has to be true. Why couldn't you
have one-state or two-state illusions? Why not twenty? Or twenty
million? Why only infinite?
Can you imagine checking on this, even internally? You'd have one
imagining after another, again and again, until... what? You halted?
Until you didn't halt? Until you gave up?
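A minimal sketch of why I doubt the check could even succeed (mine,
in Python; the bound N is arbitrary): over any finite stretch of
checking, a large-but-finite system and an unbounded one look exactly
alike from the inside.

    from itertools import islice

    N = 10**6    # arbitrary; any finite bound will do

    def finite_counter():
        state = 0
        while True:
            yield state
            state = (state + 1) % N   # wraps: only N states exist

    def infinite_counter():
        state = 0
        while True:
            yield state
            state += 1                # never wraps

    # The first 1000 "experiences" are identical:
    print(list(islice(finite_counter(), 1000)) ==
          list(islice(infinite_counter(), 1000)))   # True

Any finite run of introspection is consistent with both machines;
only an unending run could tell them apart, and that's the run that
never halts.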
> >Holes are physical bits of space (or functions/components, ...) that
> >we use our minds to demarcate.
>
> Let me understand this correctly ... you claim that "space" or "void"
> is physical ... although it just has location and not any matter/ physical
> stuff? I think "location" is a dimension, much like "time" .. and as such
> isn't necessarily (or maybe isn't either probably, or even *possibly*)
> physical. Matter, time and space are assigned as different variants
> in physics ... so can these things be identical?
I don't follow you. I never said that matter, time, and space are
identical. I do think that they're all physical; that they share
that one property in common. I take it that the correct move in a
phenomenological scenario is to be an anti-realist about physics: to
say "when you say matter, you really mean such-and-such matter
qualia..." or "when you say physical, you really mean having
such-and-such sensations in common..."
Anyway, the very fact that "holes" have a "location" by definition
whereas "feelings" don't, is enough to show that "holes" are
analytically different from "feelings," whatever other properties
these things have.
> I include "possibly" because of the possible that all is mental, and
> space, physical stuff, atoms and all the rest of it are just
> concepts combined with conscious perceptions (real virtuality in
> MVT-terms) plus self-generated imagination.
Again, this doesn't bother me. Even in a virtual reality, you have to
wonder how your sensations-of-brains are connected to other minds.
> Anyway, you are not consistent saying that "we use our minds to
> demarcate" if your system does not include "minds" ... conscious
> organs.
We anti-realists get this a lot. It's completely fallacious.
Anti-realists about Xs assert that there aren't really Xs, AND that we
should reinterpret claims about Xs to be claims about Ys. So when
somebody says "The anti-realist contradicts himself when he says
such-and-such about Xs, yet maintains that there are no Xs!" they're
making a flat-out mistake. It is not wrong to talk about
consciousness or pain or other feelings; all I ask is that we remember
that these are handy terms for physical phenomena.
The way to avoid this mistake is to remember that it is *very* rare
that an intelligent person will assert an outright contradiction,
especially an intelligent philosopher; they're probably saying
something else similar.
In this case, I could have said that "holes are psychological only in
that we use our BRAINS to demarcate their boundaries" without any loss
of meaning. Indeed, I obviously should have said this: it would have been
clearer to you.
But forgive me if I slip back into such handy phrases as "Imagine that
..." "I feel differently ..." or "... they both feel the same." I'm
being an anti-realist about these phrases. We both get to say them,
but we get to say them for different reasons.
> >Well, yes. It's a philosophical problem which we resolve by pondering
> >our intuitions on the matter, or which we refuse to resolve, since
> >it's pointless.
>
> Why don't you do your Senior thesis (?) on MVT ... much less boring.
> I have a mass of publications on it.
Well, it's because the science part isn't philosophy, and I don't
particularly agree with the philosophy part.
> S> Your "consciousness isn't real" posture just doesn't cut it in workaday
> S> reality.
>
> >It works just as well as its opposing view, so long as one
> >pro-actively reinterprets consciousness claims into something else.
> >"When you say you're in pain, you mean that your E fibers are firing,
> >and that's a bad thing; better get you some ibuprofen." In practice,
> >we simply don't decide whether to re-interpret consciousness claims or
> >not, since their reinterpretation does not affect our decisions about
> >what to do about "pain," whether pain is mental or purely physical.
>
> But our senses, through which we interpret the world .. and our common
> sense, or "how we" combine and interpret sense data, can recognise
> the "feel" experience of *pain* because we KNOW what it is like ourselves.
> A torturer can soon change someone's mind (though not as elegantly as
> my new system of hypnosis arguably!)
Sure. Just be sure that you re-interpret all of those claims into
physical ones (when you intend to speak formally), and I agree with
all of that.
Since your theory and anti-realism agree on truth values, and since
you don't actually have to re-interpret claims on the spot (or ever,
if you don't intend to speak formally), they're equally useful for
workaday practice.
> Yes, so are there some truth or empirical grounds which reinforce
> even the hypnosis, or make the hypnosis easier to take if experience
> is in agreement?
Well, it doesn't matter to me whether there's a "truth" or not. I'll
go about acting like my intuitions are pretty well (but not perfectly)
in touch with it, either way.
I'd assume that hypnosis is harder to the extent that it disagrees with
my currently active beliefs/desires, but that there's nothing else
helping or hindering the matter. Hypnotizing me to believe the truth
when I strongly believe a falsehood should be no harder than
hypnotizing me to believe a falsehood when I believe the truth, all
else being equal (which it so rarely is).
> And given that we could already be in trance or working on
> post-hypnotic suggestions, how do you make any truth claims for your
> own beliefs in determinism, epiphenobblyism and all that Turing
> machine rubbish?
I make truth claims because I'm following my intuitions, and my
program. Do I make justified truth claims? I think I do, and that
has to be pretty much good enough for me.
> S> No Turing machines of the sort you discuss have ever, nor will ever, be
> S> built.
>
> >Perhaps not, but only on account of their being too cumbersome, slow,
> >expensive, and hard to build, not by virtue of their being different
> >in principle.
>
> Yes, or in other words the idea of building them is daft! So why would
> evolution (infinitely pragmatic) have used such a daft design?
Nobody's arguing that nature has actually designed a tape-reader, or
that the human brain has any tape. But nature might be stuck using a
million tape-reader-equivalents instead of something better because
Turing machine equivalents (the collection of which, in turn, is one
big Turing machine equivalent) are all that are physically available.
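For what it's worth, a "tape-reader" needs astonishingly little
machinery. Here's a minimal sketch (mine, in Python; the
unary-successor machine is a toy I made up) of what "Turing machine
equivalent" amounts to:

    def run_tm(rules, tape, state="start", head=0, max_steps=1000):
        # tape holds symbols by position; "_" is the blank symbol
        tape = dict(enumerate(tape))
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = tape.get(head, "_")
            write, move, state = rules[(state, symbol)]
            tape[head] = write
            head += {"L": -1, "R": 1}[move]
        return "".join(tape[i] for i in sorted(tape))

    # Toy machine: scan right over 1s, append a 1, halt (unary n -> n+1).
    rules = {
        ("start", "1"): ("1", "R", "start"),
        ("start", "_"): ("1", "R", "halt"),
    }
    print(run_tm(rules, "111"))   # -> 1111

The claim is not that neurons look anything like this; it's that
whatever neurons compute, some (monstrously large, monstrously slow)
rule table of this kind computes too.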
> S> The SHAPE of the brain affects its function, the shape evolved, and
> S> (gestalt) can be described in terms of foreground OR background
> S> (holes), or both.
>
> >But holes are different from sadness; different in kind and in
> >principle. They may be linked, but they're not the same.
>
> No, you continue to misunderstand that "holes" are an analogy that I
> use to point out some shared attributes with the "absent" or
> "abstract" pineal eye .... but I by no means hold that this analogy
> is the whole story.
Sure, I see where you're going here. To say that something is
"missing a certain function" is just to state counterfactually that,
if you had a pineal eye, you'd act in such-and-such a way. It is to
describe a certain possible state-of-affairs in which you had a pineal
eye, and you acted that way. But that "possible state-of-affairs" is
just an idea, a non-physical if not purely mental object. So "missing
functions" are psychological just like possibilities are.
But we're still overloading "psychological" here, because even
possibilities, and thereby missing functions, are different in kind
from feelings, sensations, and other qualia. A feeling is not a
possibility, not even a mental representation of a possibility.
One common brand of anti-realism about consciousness is
"functionalism," the view that "being conscious" is really nothing
more than the body's CAPACITY to have certain kinds of behavior, or to
be in a certain physical state. Hilary Putnam, who, I believe,
originally coined the term, emphasized that one of functionalism's
most important characteristics was that it maintained that mental
properties were non-physical properties: although "capacities" are
properties of physical objects, the functionality itself was
non-physical; it was merely a statement of possibility, or, in
philosophical jargon, it was a modal property of a physical thing, a
way a thing could be.
But this solution does not solve the mind/body problem, as even Putnam
himself now admits. Because to say that you feel sad is not to say
anything about the way your body is OR about the way that your body
could possibly be. All our old thought experiments come back. It's
easy to imagine someone who has all of the functionality of a person
but doesn't have any feelings. There's nothing about the (ordinary,
non-functionalist) definition of "feelings" implying that someone
who COULD ACT in a certain way feels in a certain way.
Indeed, there are obvious counterexamples: actors act sad, but they
aren't sad. A more refined functionalism can get around this
simplistic counterexample, but still suffers from the same old
objections: this is all talk about the PHYSICAL. It's talk about
one's PHYSICAL functionality. The mental is something more; something
else. And functionalism provides us with no explanation for how or
why modal properties of physical things cause or imply mental
properties.
Why this little parable about functionalism? Because your view, that
being conscious just is to miss the functionality of the pineal eye,
and nothing more, amounts to a kind of anti-functionalism: it is to
say that being conscious means to have a MISSING physical
functionality, rather than a present physical functionality, as Putnam
had argued. It's a modal property of a physical object either way.
Your anti-functionalist view suffers from all the same problems as
functionalism; all the same objections. Maybe it has all the same
intuitive support. Certainly it's worth noting that missing
pineal-eye functionality seems to cause positive physical
functionality like the kind Putnam was interested in. That's what led
us to conclude that the pineal eye was interesting: missing that
physical functionality seemed to grant us another functionality:
intelligent behavior. But we can't really use this as a solution for
the mind/body problem, or even as intuitive support for a solution,
because you'd just be arguing for functionalism: you'd be using
functionalism to argue that you'd solved the mind/body problem.
Modal properties are not mental properties. "Being sad" says
something about the way my body is, but it says more than that. "Being
conscious" says something about the way my body could be, but it says
more than that. But unless we turn into dualists (and probably
epiphenomenalists, at that), we may never be able to quite put our
finger on what that might be.
So... even if we put scare quotes around "holes" and remind ourselves
that we mean missing components/functions, still "holes" are
analytically different from feelings. Functionality, even missing
functionality, is not enough to convert the dualists. They want more.
I argue that it's more than scientists can even give them. But they
want more, all the same.
> >Well, of course, there is a weak version of free-will, according to
> >which we have free will if only we can make decisions independently of
> >our genetics and our fellow man, but NOT necessarily independent of,
> >say, the state of the Earth yesterday or last week or last year. I
> >must agree with that weaker version.
>
> I will reciprocally agree with a weaker version of determinism: after
> all we cannot step outside this Universe (at least very easily!) and I
> accept that we do not have free will in all things. Perhaps by combining
> these weaker versions we can stop this tired old Free Will debate?
Probably not, so long as philosophers assert that we CAN make
decisions wholly independent of how we were last week or last
year, and so long as philosophers argue that the difference between
us and a million Turing machines (or one big fast one) is that one is
free-willed and the other is not.
In the weak sense we just defined, a Turing machine CAN be
free-willed, CAN make its decisions independently of what other people
"tell" it to do. Highly self-modifying but deterministic Turing
programs can and regularly do surprise their creators. Nobody told
Deep Blue to move the queen there. No one knew that it would; not
even it, until it found out that it "desired" to do that. (Scare
quotes again.)
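Here's a minimal sketch of that point (mine, in Python; the game and
its payoffs are invented): the programmer writes only the search
rule, and which move satisfies it is discovered, not dictated.

    def minimax(node, maximizing=True):
        # A node is either a payoff (an int) or a list of child nodes.
        if isinstance(node, int):
            return node
        values = [minimax(child, not maximizing) for child in node]
        return max(values) if maximizing else min(values)

    # A made-up game: three candidate moves, each with two replies.
    moves = [[3, 12], [8, 4], [2, 14]]
    values = [minimax(m, maximizing=False) for m in moves]
    print(values.index(max(values)), values)   # -> 1 [3, 4, 2]

Nobody typed "pick move 1" anywhere in that program; it "found out,"
deterministically, that it "desired" move 1.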
> >I never said that they did it in real time... they do it considerably
> >slower than that. But doing it at all is interesting.
>
> NO ... why go with a less good analogy, a less efficient proof,
> and with a daft imaginary (Turing) architecture when you would
> learn far more looking at neural (silicon) computation ... which
> sculpt themselves from experience rather than running code ...
>
> The von Neumann and Turing analogies as to how a brain works
> are of as much use and out of date as the 1920's analogies between
> the brain and a telephone exchange.
Because the Turing-Church thesis tells us of what we human beings are
capable (namely, nothing more than of what a bunch of Turing machines
are capable, and therefore nothing more than of what one great big
fast Turing machine is capable) and what can possibly act like us.
> >In
> >particular, if you had a slow person who still had REM, still claimed
> >to have dreams and told me about them, still told me how they felt,
> >still talked to me like an ordinary slow person, I'd be drawn to
> >conclude that they were just like me, only slower, including having
> >consciousness, only slower, assuming I have it in the first place.
>
> This isn't a case of just "equivalent but slower" .... they would also be
> working in a completely different way from you. Neural computers
> are reverse engineered from brain circuits, so are a lot more
> convincing ... this ISN'T just an aesthetic matter either ... the neural
> computational model of the brain is BETTER than the older serial
> computational models ... sure a load of sluggard academics still use it
> but they are due for reduction by MVT pretty soon, I hope.
But the fact that they'd do the same things at the end of a very long
day, I insist, IS interesting. They WOULD behave the same way, though
one would do so much, much slower (but that's all).
-Dan
-unless you love someone-
-nothing else makes any sense-
e.e. cummings