Re: Placebo effect not physical

From: Dan Fabulich (
Date: Thu Jan 04 2001 - 17:36:51 MST

Steve Nichols wrote:

> >What does that completeness get you? Does it answer some
> >philosophical questions? Then write an article and use some diagrams.
> I prefer to make films ... music, pictures & animation plus narrative &
> discussion. Interactive software also .... not boring linear articles.

If you like. These are usually considerably less formal, but, hey,
whatever sells your point.

> >Finally, your language may be considerably less formal than English.
> >But then, your language is so different from philosophy as we know it
> >that it becomes its own entity, distinct from philosophy entirely, as
> >jazz and abstract dance are to philosophy.
> Have cut the irrelevant stuff about artificial languages .... philosophy
> already consists of this .. a barrier to keep non-philosophers out.

Yeah. "Why can't I, a world-reknowned jazz musician, be considered a
sculptor? Sculpture is nothing more than a barrier to keep
'non-sculptors' out."

> I am going to argue that your whole case relies on DEFINING
> everything as "physical" and not allowing anything to exist outside
> this definition ... you can call it anti-realism or whatever, but
> your argument is completely vacuous and circular. If "Principles"
> and "symbols" and "time" are physical (ha, what rubbish) then at
> least you have to accept they are not "physical" in the same sense
> of the word as rocks or brains are "physical" ... so what value has
> this catch-all word "physical" ... none at all.

No, it has some uses. It has an intension as well as an extension.
"Physical" doesn't just describe the set of things which do exist, but
the set of things which could possibly exist. It supports
counterfactuals, as they say.

More to the point, it seems to be in opposition to theories like
yours. If "physical" meant nothing, or everything, you should be able
to show that, under my own definition of "physical," your theory is
physical. But you can't do that. Your theory inherently relies on
non-physical elements. (Yet you try to claim the scientific
high-ground and resort to name-calling with cries of "medievalism.")

> >Well, my physicalism denies that symbols are anything more than their
> >physical part. e.g. Writing is nothing more than scratches on paper.
> So all scratches on paper ARE writing? Ridiculous!

Forgive me. Scratches on paper that are USED in a certain way by
people. No non-physical component, unless you consider their use as a
kind of "logical part," which is misleading.

> >My physicalism denies that there is some non-physical meaning-object
> Who apart from a philosopher would talk about "meaning-object" in
> order to deny straightforward "meaning" ... I bet you do not use this
> garbage-speech in normal conversation.

I don't normally have conversations about physicalism. But, I assure
you, whenever I do, I use these sorts of words and adopt this style of
speech.

> >which is out there somewhere, attached to the word. The word has a
> >use, an effect on ourselves and others, but nothing more.
> Not "out there" anywhere , this is just a lame assumption that everything
> is physical again .... ideas are properties of the mind (phantom pineal
> eye) so do not need physical extension.

Properties of a non-physical object? No good. As far as I'm
concerned, ideas may be properties of physical objects, or
second-order properties of properties of physical objects, but at the
end of the day, it must be a physical property.

> >Yes. Principles are arrays of physical symbols, which may be stored
> >in the physical brain.
> I have looked through the OED and other dictionaries to see if I can
> find anything like your description ... but, *no* ... surprise surprise,
> all the accepted definitions are something like "a source, root,
> origin: that which is fundamental: a theoretical basis: a faculty of
> mind: a source of action: a fundamental truth on which others are
> founded or from which they spring" .... and more of the same.

How unbelievably obtuse.

I don't have my OED here in LA, so I'll just use the online Merriam
Webster version.

1 a : a comprehensive and fundamental law, doctrine, or assumption
  b (1) : a rule or code of conduct
    (2) : habitual devotion to right principles <a man of principle>
  c : the laws or facts of nature underlying the working of an
    artificial device
2 : a primary source : ORIGIN
3 a : an underlying faculty or endowment <such principles of human
    nature as greed and curiosity>
  b : an ingredient (as a chemical) that exhibits or imparts a
    characteristic quality
4 capitalized, Christian Science : a divine principle : GOD

I clearly didn't mean 2-4 when I said that principles were arrays of
symbols. (And, hey, if you'd had some charity, maybe you could have
seen that. This is an area where bad ethos will get you in trouble.
Seriously, try to engage with respect and empathy. Try to figure out
how I could possibly be right. You just might figure out what I meant
for a change.)

So I meant definition 1. And laws, doctrines, assumptions, rules, and
codes are all statements, usually sentences. Sentences ARE arrays of
symbols. But the key point I was making there is that even principles
which are not written down as scratches on paper are encoded
physically in the brain: they are coded statements. The brain is a
symbolic processor in this sense, just like the Turing machine.
Principles in the brain are complex arrays of brain symbols.
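
To make "arrays of symbols" concrete, here's a toy sketch of my own
(nothing below comes from MVT or any published source): a trivial
Turing-style machine whose entire "principle" ("invert every bit") is
nothing but a table of symbol-rewriting rules, executed mechanically.

```python
# Toy illustration (hypothetical): a principle stored purely as an
# array of symbols (a transition table), executed mechanically.

def run_tm(tape, rules, state="S", blank="_"):
    """Run a one-tape Turing machine until it enters halt state 'H'."""
    tape = list(tape)
    pos = 0
    while state != "H":
        symbol = tape[pos] if 0 <= pos < len(tape) else blank
        write, move, state = rules[(state, symbol)]
        if 0 <= pos < len(tape):
            tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape)

# The "principle" ("invert every bit") is just this rule array.
invert_rules = {
    ("S", "0"): ("1", "R", "S"),
    ("S", "1"): ("0", "R", "S"),
    ("S", "_"): ("_", "R", "H"),
}

print(run_tm("01101_", invert_rules))  # 10010_
```

The point being: nothing in the machine "understands" inversion; the
principle exists only as a physically encoded symbol array.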

> Are you claiming your strange definition is correct and *everybody*
> else including the Oxford English Dictionary is wrong?

No, I claim that you failed to understand what I said, because you
were looking for a way to make me wrong, rather than looking for how
or why I could possibly be right. You picked the wrong
interpretation, instead of the right one. See how that's a waste of
our time?

> > The brain can't *arbitrarily* change its hardware.
> The brain is very plastic and continually forges new connectivity ..
> the *physical neurons* change ... and not just signal routes. Do
> you deny this? How can ontogeny occur at all by your account?

No, I don't deny this. But changing physical neurons does not imply
changing hardware. Babbage's Analytical Engine, a mechanical
computer, changes its PHYSICAL layout. But it still can't change its
hardware. The hardware is just the part that it can't change. In a
normal desktop computer, the hardware is the stuff you can drop on
your foot, but that's just a coincidence about computers as we build
them today.

> >It's obviously
> >restricted by the laws of physics, but it also suffers from many other
> >more restrictive limitations. The Turing machine has some obvious
> >limitations, but the fact that it can't change THOSE doesn't mean that
> >it's not just like a brain (in all the relevant ways).
> But it isn't like the brain in ANY way ... the brain is distributed
> parallel.

Look, I keep saying that they share property A, and that A is the
relevant property, the way in which they are both alike, which implies
that they have property B in common. You insist: "No! they do not
share property C! Your claim is refuted!" But, of course, I never
said that they had property C in common, and nothing I said should
lead us to infer that they had THAT property in common. What I DID
say is that property C is irrelevant to the claim that they have
property A in common and therefore property B in common.

They are both equivalent in behavior to a Turing machine. Therefore,
they both share the limitations of a Turing machine. One of them is a
distributed parallel system, and the other one isn't. But (pay
attention now) that is irrelevant to the claim I'm making.

Distributed parallel systems are equivalent to Turing machines in
behavior. They can't do anything a big fast Turing machine couldn't
do. Does that mean that I'd build one? No, but that doesn't matter.
What matters is that they share the same limitations. Whatever
behavior the one couldn't exhibit, the other couldn't exhibit either.
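
To illustrate (a sketch of my own, with a made-up update rule): a
synchronous "all cells at once" update is reproduced exactly by a
strictly one-cell-at-a-time loop reading a snapshot of the old state.
Same behavior, hence the same behavioral limitations.

```python
def parallel_step(cells):
    # Conceptually "distributed parallel": every cell updates at once,
    # each becoming the XOR of its two ring neighbours.
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

def sequential_step(cells):
    # A strictly serial machine simulating the same step: one cell at
    # a time, reading only a snapshot of the old state.
    old = list(cells)
    n = len(old)
    out = []
    for i in range(n):
        out.append(old[(i - 1) % n] ^ old[(i + 1) % n])
    return out

state = [0, 1, 1, 0, 1]
print(parallel_step(state) == sequential_step(state))  # True
```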

Neither one can change their own "hardware," by definition.
Resistance to damage is irrelevant to *behavior*. They share the same
behavioral limitations.

> You persist with the old hardware/ software language that just doesn't
> cut it when examining neural computers .... there is no software, just
> weight-states ... evolution sculpts the response directly.

What's wrong with calling that software, exactly?

> S>So, I'm not telling you that the brain's circuitry is analogous to the
> S>circuitry of a Turing machine, but that the brain's hardware, the part
> S>that cannot be changed internally, is equivalent to the Turing
> S>machine's hardware.
> There is not a scrap of evidence for this ... what studies do you cite here?
> And which parts of the brain are you saying are non-plastic?

Look, I'm not going to *bother* to cite this no-brainer. Some
characteristics of the brain are non-plastic. The fact that the brain
can't change the laws of physics implies that the laws of physics can,
if need be, serve as the non-plastic element. I'm not making a very
large assumption here, but important conclusions follow from this
obvious point.

> >Now, I want to remind you that I don't share those "feelings" with
> >you. I'm an anti-realist about feelings. We should reinterpret them
> >as purely physical.

> Why *should* we do that ... why adopt such ludicrous and
> anti-intuitional way of talking .. nobody actually does this. I can
> make equally as strong a case that the physical is an epiphenomenon
> of the mental, and that everything is an "illusion" and that nothing
> is physical .... but I would rather assume that both physical AND
> non-physical entities can happily co-exist. MVT allows me that ... I
> am not forced to adopt such extreme and unwieldy positions as either
> materialism or idealism (both worn-out philosophical jargon worlds).

Because materialism is scientific. Materials can be scientifically
verified. Phenomena cannot. No other philosophical theory, including
MVT, can claim the scientific high ground. We need nothing more than
the materials to explain everything material. Why invoke the ideal
when we have a science of materials?

> All you are saying is that there is a lessening only in *degree or type* of
> physicalness ... that the phantom effect of the pineal eye is less physical
> than the actual cellular structure that has gone ... whereas I take the
> further view that abstract images can only be formed by an abstract
> sense-organ (memory of physical organ, if you like). The language differs,
> but maybe this is not a significant difference .. the philosophy and its
> jargon just gets in the way here.

If you like. But it seems that one way of using this jargon has
considerably more intuitive appeal than the other, though we disagree
which one has that status. If you like, this may be an argument from
intuitive grounds as to which jargon we ought to use. But this is not
an idle concern. Our jargon shapes the way we view the world and
interact with it. It fundamentally shapes what theories we will
accept and which we will not. So, if that's irrelevant to you, we can
skip the philosophy, and leave it up in the air as to which jargon has
more intuitive appeal.

> >My incapacity to point to something doesn't mean that it's not
> >physical. But no matter.
> I think you have big problems here ... the essence of being physical
> seems to be that there *should* be something you can point to, or at
> least detect by the instrument of physics. Without any objective evidence,
> your claim to the physicalness of such "unpointable to" things is weak
> at best ... and is eminently challengeable by idealists and others.

Not very. All physical objects can be pointed to. You cannot point
at any properties, however. Time is a property of a physical object.
It is not a physical object, but it is a physical property. Same
thing with length. You cannot point to two inches, but you can point
to objects which are two inches long, and no non-physical things have
the property of being two-inches long.

> Sure, both are illusions, holes have locations ... but also feelings
> can be "in my guts" or elsewhere .. just different, internal places.

They can be, but they NEEDN'T be. I picked sadness for a reason.
Sadness isn't anywhere. Metaphorically, it's in the heart, but we all
understand that as poetry, not a scientific claim about the facts of
the matter. If physical (in)capacities have physical objects as
substrates, then sadness cannot count.

> >Well, I'll just adopt this phenomenologist strategy to answer you
> >point above, then. Time may be a measurement, but it's a "physical"
> >measurement, whatever "physical" means.
> "Physical" here might mean "hallucinatory" or "mental" .... fictional
> even? What exactly do you mean by "physical" since you use it
> to describe some very, very different things. Can one thing be
> "less" physical than another? If so, then you have a scalar system,
> and at one extreme of the scale will be "non-physical" things.

Because even under idealism there will be a difference between the
hallucinations of brains and minds. The one will be a "physical"
hallucination, under that wacky definition, but even then, the mind
STILL won't be "physical." Same old problem, even under a radically
bizarre definition of "physical."

> >Again, you cannot get away from this problem by using idealism,
> >because you still have all the same problems, slightly modified for
> >idealism. What's the connection between sensations-of-brains and
> >sadness? How can you solve the problem of other minds with idealism?
> >All you have access to is sensations of their behavior, and of their
> >brains. How can you conclude that they have a mind on that basis?
> Yes, which is why (if you have ever read any of my MVT publications) I
> am careful to avoid identification with any of the philosophical camps ...
> I trot out the idealist position just to counter your physicalist
> assumption,

But it doesn't help *your* position, because you fall under attack
from both sides. If I have to, I'll take the more general view that
it doesn't matter which is right, idealism or physicalism, because,
whichever it turns out to be, your theory is either wrong or
content-free.

> but I actually think the distinction between phyical and mental is blurred,
> the shape of the body changes the shape of the mind, and the shape
> of the mind affects the shape and action of the body (Aristotle).

Aristotle didn't know, just as Descartes didn't know, that the
physical world is causally closed. That was our original problem,
you'll recall.

> >You might be taking a page from the anti-realist's book and say "I
> >cannot be wrong about there being a mind there, because the mind
> >itself is on the same ontological level as an illusion. If I believe
> >that there's a mind there, then there's an illusion of a mind there,
> >so there's a mind there, because a mind is just an illusion of a
> >mind."
> Exactly, this is the pointless circularity of philosophy divorced from
> facts and observations of nature.

Read that again... I was characterizing a view which you might be
holding. Are you? I assume not, but if not, it's not obvious what
you WERE saying.

> >But then I can make a similar move back in the physical realm: "I
> >can't be wrong in making belief statements about minds, because having
> >a mind is just the same as our 'believing' that there's a mind there."
> >(Again, scare quotes to remind you that I'm reinterpreting 'belief' to
> >be purely physical.) This, obviously, is just the Turing test: if it
> >"seems" to have a mind to us, then it does.
> Just word games, absolutely content-free!

If that's the case, then the difference between MVT and MVT' is nil.
One has some extra non-scientific content-free statements. Hmmm.

> I have a scalar view which is actually neither physicalist, idealist or
> dualist .... MVT is adequate on all explanatory accounts, whereas your
> fictional and unbuildable Turing machine view is no better than a
> medieval supernaturalist account .... you have no evidence.

Name calling. And, I argue, the reverse of the case: MVT FAILS under
all explanatory accounts. It's no good under idealism, no good under
physicalism, no good under dualism (even scalar dualism), it's just no
good, unless you resort to equating MVT with MVT', which is an
acceptable interpretative maneuver, but makes your theory no theory of
consciousness at all.

> S> I don't accept that "consciousness" is a handy term for a physical
> S> phenomena at all ..
> >Fine. But don't accuse me of contradicting myself, because when *I*
> >use the word "consciousness," I'm referring to physical
> >characteristics.
> But when you refer to absolutely anything at all you must prefix "physical
> characteristics" .... so what. I make a distinction between "physical minds"
> when talking of E-2 animals controlled by their physical (cellular) pineal
> eye,
> and "non-physical" or "abstract" minds in the E-1 case where no cellular
> structures remain. How do you propose to word this distinction?

What's wrong with "E-2" and "E-1"? "present/absent" eye? Stupid and
smart? The possibilities are endless. You can even use "non-physical
mind" if you like, so long as you understand that as shorthand.

> >No strictly *chemical* effects, but the "psychological" effects are
> >physical effects. The placebo has an indirect physical effect to cure
> >the patient, by making the patient "think" that they are being cured,
> >and, for example, relieving stress, and bolstering the immune system.
> >All physical all the time. The placebo effect *is* physical.
> But your epiphenomenalism has to show that everything runs exactly
> the same whether people can "think" or not, and the placebo case
> clearly refutes this. "Thinks" is a consciousness term.

I put scare quotes around "thinks" to indicate that I meant my
anti-realistic "thinks." It's the behavior of your thinking, the
physical part of your thinking, the bit that, formally speaking, a
philosophical zombie could exhibit without having any thoughts at all.
Thinking and acting like you're thinking are not the same; the second
part is what I mean when I say "thinks."

Simply by making the patient do the physical part, the placebo works.
No mental elements need be involved. "thinks" is a shorthand for
brain states, not for non-physical ghoulies and ghosties which you'll
never observe.

> >Still, it would all be physical. Physical words would affect my
> >physical brain, which would affect my physical body.
> The sound, or written symbol, of the word I can accept as "physical"
> under the normal usage of the term, but not the "meaning" or
> interpretable content or significance, "that thought which is conveyed"
> by the word. Can you make this distinction, or is the "meaning" as
> physical to you are the air-wave/ sound or light-wave/ image?

"meaning," if applicable at all, is in "use." The "referent" is the
thing that we'd probably point to if asked "what's the referent of
this scratch on paper?" It's also in other scratches on paper, in
other physical symbols.

Yes, meaning is as physical as an air-wave.

> >In the brain. It is encoded in the layout of the neurons and their
> >connections, just as computer code is encoded in the layout of
> >transistors on a chip. The program is there. It can be changed
> >externally, by culture, experience, etc. but the program is there,
> >just as a robot's program is on its hard disk.
> But no one can read the internal state of a neural computer
> because it is a dynamic system ... the memory does not "exist"
> as stored until some hook or other activates a state similar to a
> previous pattern that evokes a "memory". I think you are confusing
> "program" with "constraints" .... a neural computer "learns" by exposure
> to lots of examples .. and makes its own patterns from these ..
> This is how a humanoid infant learns to talk .. by being talked to ..
> and listening to voices in the world, and NOT by being given a
> book on the rules of grammar or an English primer. Same with neural
> computers ... no programmers or software .... just reward for correct
> and absence of reward for wrong, and it figures out the patterns for
> itself.

When I simulate a neural network that does this on my desktop, is it
exhibiting a program or constraints? I'm not conflating these;
they're the same. The presence or absence of a human programmer
doesn't imply that it's not a program.
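
Here's a minimal sketch of what I mean (my own toy example, not a
claim about any particular neural computer): a single unit learns AND
purely from examples and an error signal, with no rules written in.
The learned weight-state just IS the program.

```python
# Hedged sketch (mine, not from the post): a unit "learns" AND from
# examples and a reward/punishment signal alone -- no hand-written
# rules -- yet the resulting weight-state is still a program.

def train_perceptron(examples, epochs=10, lr=1):
    w = [0, 0]
    b = 0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out  # simple reward/punishment signal
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Trained on examples only -- no grammar book, no explicit rules.
AND_EXAMPLES = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND_EXAMPLES)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in AND_EXAMPLES])  # [0, 0, 0, 1]
```

No programmer wrote the AND rule anywhere; it emerged from reward
alone, and yet the trained weights are as much a program (or
"constraint") as any hand-written code.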

> >It only answers Leibnitz as well as functionalism does. But if you
> >understood the objections against functionalism, you'll see that you
> >still haven't captured what mental realists want to capture about
> >"feelings." To have certain physical capacities is not to have a
> >feeling. It's easy to imagine someone with (or without) those
> >capacities who has or doesn't have feelings. They're different in
> >definition, in principle. And you haven't explained the link.
> The link is in the identification with phantom pineal eye sensation
> as "self" owned or originating .... we "own" our thoughts and feelings
> in a way that your robot cannot, it is a closed physical system, whereas
> we have a "non-cellular" component.

No, this is not an explanation. You don't explain why we necessarily
have a feeling when we have physical (in)capacities. Here, you just
re-insist that we DO.

> S> The abstract, phantom pineal eye is not physical in the same way that
> S> the pineal gland is physical: thus it could interact ....
> >Sure, not physical the same way capacities are not physical. But
> >that's not enough.
> The whole-part fusion of physical brain with phantom organ of generic sense
> is enough to provide us with an experiential gestalt, we can reintegrate
> action-potential signals into the whole experience because our brains
> instantiate a sensor out of the very same type of action-potential signals.

No. There's no gestalt when one link in the chain is missing: the
link between the physical (in)capacities and the feelings, on your
account.

> But you cannot move from "can" to "does" ... so your speculation
> carries no more weight than the supernaturalists, perhaps the
> Scientologists who think we are possessed by Thetan demons ..
> both accounts work in theory to cover any potential type of experience,
> but neither are true in practice.
> Start from the phenomena, look at how the brain IS constructed, and
> the case that the brain is a neural computer (or cluster of) is irrefutable.

Neural computers, all of them, are Turing machine equivalents. They
are equivalent in behavior. They share their limitations. If one of
them can't be free-willed, then neither can the other. If one of them
can't have feelings, then neither can the other.

> >When we design a functional human-equivalent AI, it will obviously be
> >a Turing equivalent. Will you be in the camp of philosophers who will
> >insist that it still cannot be conscious?
> Go ahead and do it ...... if you can. But even if you achieve it YOU still
> deny it can be "conscious" because of your strange physicalist beliefs.

I asked you a question. This was not an answer.

> And you are talking only about a "human EQUIVALENT" .. I and MVT
> are talking about the real things and how they actually DID evolve, not
> some simulation fiction!

The simulation tells us facts about the reality. It's like how math
tells us about physics.


      -unless you love someone-
    -nothing else makes any sense-
           e.e. cummings

This archive was generated by hypermail 2b30 : Mon May 28 2001 - 09:56:16 MDT