Date: Wed, 3 Jan 2001 16:12:16 -0500 (EST)
From: Dan Fabulich <email@example.com>
Subject: Re: Placebo effect not physical
S> But I allow myself all the mediums ... so my analysis is more complete
S> than one limited to linguistic analysis...
>Again, we've already got it.
>What does that completeness get you? Does it answer some
>philosophical questions? Then write an article and use some diagrams.
I prefer to make films ... music, pictures & animation plus narrative &
discussion. Interactive software also .... not boring linear articles.
>Finally, your language may be considerably less formal than English.
>But then, your language is so different from philosophy as we know it
>that it becomes its own entity, distinct from philosophy entirely, as
>jazz and abstract dance are to philosophy.
Have cut the irrelevant stuff about artificial languages .... philosophy
already consists of this .. a barrier to keep non-philosophers out.
>Now, conclusively settling the matter on an old philosophical question
>and raising new ones that no one has thought of before IS quite
>valuable. But, even so, one shouldn't be under the misconceptions
>that the language raises questions that cannot be named in English, or
>presents arguments which CANNOT be stated in modern philosophy
I am going to argue that your whole case relies on DEFINING everything
as "physical" and not allowing anything to exist outside this definition ...
you can call it anti-realism or whatever, but your argument is completely
vacuous and circular. If "Principles" and "symbols" and "time" are physical
(ha, what rubbish) then at least you have to accept they are not "physical"
in the same sense of the word as rocks or brains are "physical" ... so what
use has this catch-all word "physical" then? ... none at all.
>Well, my physicalism denies that symbols are anything more than their
>physical part. e.g. Writing is nothing more than scratches on paper.
So all scratches on paper ARE writing? Ridiculous!
>The words which I've sent to you are nothing more than bits on a disk,
>signal down a wire, glowing phosphors on a screen, and are also stored
>in the layout of our purely physical brains.
>My physicalism denies that there is some non-physical meaning-object
Who apart from a philosopher would talk about "meaning-object" in
order to deny straightforward "meaning" ... I bet you do not use this
garbage-speech in normal conversation.
>which is out there somewhere, attached to the word. The word has a
>use, an effect on ourselves and others, but nothing more.
Not "out there" anywhere ... this is just a lame assumption that everything
is physical again .... ideas are properties of the mind (phantom pineal
eye) so do not need physical extension.
>Yes. Principles are arrays of physical symbols, which may be stored
>in the physical brain.
I have looked through the OED and other dictionaries to see if I can
find anything like your description ... but, *no* ... surprise surprise,
all the accepted definitions are something like "a source, root,
origin: that which is fundamental: a theoretical basis: a faculty of
mind: a source of action: a fundamental truth on which others are
founded or from which they spring" .... and more of the same.
Are you claiming your strange definition is correct and *everybody*
else including the Oxford English Dictionary is wrong?
S> I don't think you are using "metaprogram" in the accepted
S> psychotherapeutic sense here, but are changing the idea to try
S> and fit with your philosophical bias. Brains can not only change
S> their "programming" but can reconfigure their own *hardware*
>The brain can't *arbitrarily* change its hardware.
The brain is very plastic and continually forges new connectivity ..
the *physical neurons* change ... and not just signal routes. Do
you deny this? How can ontogeny occur at all by your account?
>restricted by the laws of physics, but it also suffers from many other
>more restrictive limitations. The Turing machine has some obvious
>limitations, but the fact that it can't change THOSE doesn't mean that
>it's not just like a brain (in all the relevant ways).
But it isn't like the brain in ANY way ... the brain is distributed.
>More to the point, you say that the brain changes its "hardware." I
>say that the very changes which you call changes in "hardware" are
>really changes in "software." They're software because they CAN be
>changed internally. The hardware is, by definition, the stuff that
>has to be changed externally.
You persist with the old hardware/software language that just doesn't
cut it when examining neural computers .... there is no software, just
weight-states ... evolution sculpts the response directly.
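The weight-state point can be made concrete. Here is a minimal sketch (the function, numbers, and task are all my own illustration, not anything from MVT or from this exchange): the "behaviour" lives entirely in the numeric weights, and changing behaviour means changing the weight-state, not editing any stored program.

```python
# A "neural computer" in the sense above: the response is determined
# entirely by the numeric weight-state, not by a separate program.
def respond(inputs, weights):
    """Weighted sum passed through a threshold: the whole 'response'."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation > 0 else 0

# "Sculpting the response directly" means altering these numbers.
weights = [0.5, -0.3, 0.8]
print(respond([1, 1, 0], weights))  # -> 1 (0.5 - 0.3 = 0.2 > 0)
```

There is nothing in the system that looks like instructions to read off; only the weights.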
S>So, I'm not telling you that the brain's circuitry is analogous to the
S>circuitry of a Turing machine, but that the brain's hardware, the part
S>that cannot be changed internally, is equivalent to the Turing
There is not a scrap of evidence for this ... what studies do you cite here?
And which parts of the brain are you saying are non-plastic?
>Now, I want to remind you that I don't share those "feelings" with
>you. I'm an anti-realist about feelings. We should reinterpret them
>as purely physical.
Why *should* we do that ... why adopt such a ludicrous and anti-intuitive
way of talking .. nobody actually does this. I can make an equally strong
case that the physical is an epiphenomenon of the mental, and that everything is
an "illusion" and that nothing is physical .... but I would rather assume
both physical AND non-physical entities can happily co-exist. MVT allows
me that ... I am not forced to adopt such extreme and unwieldy positions as
either materialism or idealism (both worn-out philosophical jargon worlds).
>But, once we do that, yes, I take it that this discussion, as all
>other (philosophical?) discussions, relies inherently on "intuitions."
>But that doesn't mean that the discussion ends here. I suspect that
>we agree enough on our intuitions that by drawing attention to the
>right sorts of facts we can come to agree on the same conclusion, or
>at least clearly identify in what way our intuitions diverge.
But my intuitions are strongly against your physicalism ... which I think is
a pointless tautologous ploy. And we both accept in broad terms the
empirical facts about the atrophy of the pineal eye and emergence of
consciousness/ conscious behaviour *in direct inverse relationship* to the
loss of the physical pineal eye.
All you are saying is that there is a lessening only in *degree or type* of
physicalness ... that the phantom effect of the pineal eye is less physical
than the actual cellular structure that has gone ... whereas I take the
further view that abstract images can only be formed by an abstract
sense-organ (memory of physical organ, if you like). The language differs,
but maybe this is not a significant difference .. the philosophy and its
jargon just gets in the way here.
S> Time is the *measurement* of movement in space .. therefore
S> must be subjective because measurement is integral, not
S> incidental to the concept of time. There is no "time" out there
S> that you can point to ... it is just a concept ... not physical.
>My incapacity to point to something doesn't mean that it's not
>physical. But no matter.
I think you have big problems here ... the essence of being physical
seems to be that there *should* be something you can point to, or at
least detect by the instrument of physics. Without any objective evidence,
your claim to the physicalness of such "unpointable-to" things is weak
at best ... and is eminently challengeable by idealists and others.
>I'm not picky about whether time is merely
>the measurement or whether there's something to measure. This has
>nothing to do with the fact that holes have places, whereas feelings,
>in the ordinary mental realist sense, don't.
Sure, both are illusions, holes have locations ... but also feelings
can be "in my guts" or elsewhere .. just different, internal places.
>Well, I'll just adopt this phenomenologist strategy to answer you
>point above, then. Time may be a measurement, but it's a "physical"
>measurement, whatever "physical" means.
"Physical" here might mean "hallucinatory" or "mental" .... fictional
even? What exactly do you mean by "physical" since you use it
to describe some very, very different things. Can one thing be
"less" physical than another? If so, then you have a scalar system,
and at one extreme of the scale will be "non-physical" things.
S> Sure, except that the only knowledge you have of the hole is
S> indirect, via your feelings/ perceptions/ logical reasoning and
S> depth perception &c. The illusion of the hole has an illusionary
S> location .. there isn't any "hole" independent of the illusion, or
S> if there is it is impossible to know. Same with any so-called
S> "physical objects" .. but MVT idealism explains how the illusion
S> is produced. I don't need to deny the existence of physical objects,
S> maybe they are there, or maybe we hallucinate them, who knows
S> and who cares. The physical brain is necessary, but not sufficient
S> for consciousness. A self-referential and abstract component is
S> needed in order to interact with the world of symbols & abstraction.
>Again, you cannot get away from this problem by using idealism,
>because you still have all the same problems, slightly modified for
>idealism. What's the connection between sensations-of-brains and
>sadness? How can you solve the problem of other minds with idealism?
>All you have access to is sensations of their behavior, and of their
>brains. How can you conclude that they have a mind on that basis?
Yes, which is why (if you have ever read any of my MVT publications) I
am careful to avoid identification with any of the philosophical camps ...
I trot out the idealist position just to counter your physicalist one,
but I actually think the distinction between physical and mental is blurred:
the shape of the body changes the shape of the mind, and the shape
of the mind affects the shape and action of the body (Aristotle).
>You might be taking a page from the anti-realist's book and say "I
>cannot be wrong about there being a mind there, because the mind
>itself is on the same ontological level as an illusion. If I believe
>that there's a mind there, then there's an illusion of a mind there,
>so there's a mind there, because a mind is just an illusion of a mind."
Exactly, this is the pointless circularity of philosophy divorced from
facts and observations of nature.
>But then I can make a similar move back in the physical realm: "I
>can't be wrong in making belief statements about minds, because having
>a mind is just the same as our 'believing' that there's a mind there."
>(Again, scare quotes to remind you that I'm reinterpreting 'belief' to
>be purely physical.) This, obviously, is just the Turing test: if it
>"seems" to have a mind to us, then it does.
Just word games, absolutely content-free!
>If you didn't find the Turing test satisfying, then you shouldn't find
>your "mind-as-illusion" account satisfying. Both are "psychological,"
>whatever we think that means, but neither of them hit the nail on the
>head in conversation with a dualist.
I have a scalar view which is actually neither physicalist, idealist, nor
dualist .... MVT is adequate on all explanatory accounts, whereas your
fictional and unbuildable Turing machine view is no better than a
medieval supernaturalist account .... you have no evidence.
S> I don't accept that "consciousness" is a handy term for a physical
S> phenomenon at all ..
>Fine. But don't accuse me of contradicting myself, because when *I*
>use the word "consciousness," I'm referring to physical
But when you refer to absolutely anything at all you must prefix "physical
characteristics" .... so what. I make a distinction between "physical minds"
when talking of E-2 animals controlled by their physical (cellular) pineal,
and "non-physical" or "abstract" minds in the E-1 case where no cellular
structures remain. How do you propose to word this distinction?
>No strictly *chemical* effects, but the "psychological" effects are
>physical effects. The placebo has an indirect physical effect to cure
>the patient, by making the patient "think" that they are being cured,
>and, for example, relieving stress, and bolstering the immune system.
>All physical all the time. The placebo effect *is* physical.
But your epiphenomenalism has to show that everything runs exactly
the same whether people can "think" or not, and the placebo case
clearly refutes this. "Thinks" is a consciousness term.
S> Yes, but when I say them I mean what I say, and when you say them
S> they are packed together with an unwieldy bunch of provisos &c. in
S> that you should really qualify each statement before using it.
>No. I take the stronger view that my definition is right, and that
>yours is wrong. It's you, I argue, who should qualify your beliefs
>about feelings to point out that you don't just mean the physical
>stuff, but something MORE. But until we agree about this, we'll just
>have to interpret what the other says with charity and humanity.
But your definitions (of "Principles", see earlier) do not exist in any
dictionary ... they all support my, generally accepted, definitions.
S> There is less work for the hypnotist .. but yes, it works in either case.
S> The point I am making is for the primacy of the mental/ conscious.
S> If I suggested to you under somnambulistic trance that I was burning
S> a cigarette stub on your hand, not only would you experience it,
S> but might well manifest burn marks! (See placebo effect argument).
>Still, it would all be physical. Physical words would affect my
>physical brain, which would affect my physical body.
The sound, or written symbol, of the word I can accept as "physical"
under the normal usage of the term, but not the "meaning" or
interpretable content or significance, "that thought which is conveyed"
by the word. Can you make this distinction, or is the "meaning" as
physical to you as the air-wave/ sound or light-wave/ image?
S> Where is this program located? Or do you just mean cultural
S> conditioning and things you have learnt or been told?
>In the brain. It is encoded in the layout of the neurons and their
>connections, just as computer code is encoded in the layout of
>transistors on a chip. The program is there. It can be changed
>externally, by culture, experience, etc. but the program is there,
>just as a robot's program is on its hard disk.
But no one can read the internal state of a neural computer
because it is a dynamic system ... the memory does not "exist"
as stored until some hook or other activates a state similar to a
previous pattern that evokes a "memory". I think you are confusing
"program" with "constraints" .... a neural computer "learns" by exposure
to lots of examples .. and makes its own patterns from these ..
This is how a humanoid infant learns to talk .. by being talked to ..
and listening to voices in the world, and NOT by being given a
book on the rules of grammar or an English primer. Same with neural
computers ... no programmers or software .... just reward for correct
and absence of reward for wrong, and it figures out the patterns for itself.
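That learning-by-reward picture can be sketched in a few lines. This is a plain perceptron-style update, offered purely as an illustration of "no programmer, just reward": the task (AND), the weights, the learning rate, and the trial count are all hypothetical choices of mine, not anything claimed in this exchange.

```python
import random

def learn_by_reward(examples, trials=2000, rate=0.1, seed=0):
    """Shape weights from reward/no-reward alone; no rules are programmed in."""
    rng = random.Random(seed)
    weights = [0.0] * len(examples[0][0])
    for _ in range(trials):
        inputs, target = rng.choice(examples)
        output = 1 if sum(x * w for x, w in zip(inputs, weights)) > 0 else 0
        error = target - output  # zero when the response was "rewarded"
        for j, x in enumerate(inputs):
            weights[j] += rate * error * x  # adjust only after a wrong answer
    return weights

# Learn AND from examples alone (first input is a constant bias term),
# never from a stated rule.
examples = [([1, 1, 1], 1), ([1, 1, 0], 0), ([1, 0, 1], 0), ([1, 0, 0], 0)]
w = learn_by_reward(examples)
```

The system is never told what AND means; the pattern emerges from correction alone, which is the point being made about infants and voices.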
>It only answers Leibnitz as well as functionalism does. But if you
>understood the objections against functionalism, you'll see that you
>still haven't captured what mental realists want to capture about
>"feelings." To have certain physical capacities is not to have a
>feeling. It's easy to imagine someone with (or without) those
>capacities who has or doesn't have feelings. They're different in
>definition, in principle. And you haven't explained the link.
The link is in the identification with phantom pineal eye sensation
as "self" owned or originating .... we "own" our thoughts and feelings
in a way that your robot cannot, it is a closed physical system, whereas
we have a "non-cellular" component.
S> The abstract, phantom pineal eye is not physical in the same way that
S> the pineal gland is physical: thus it could interact ....
>Sure, not physical the same way capacities are not physical. But
>that's not enough.
The whole-part fusion of physical brain with phantom organ of generic sense
is enough to provide us with an experiential gestalt, we can reintegrate
action-potential signals into the whole experience because our brains
instantiate a sensor out of the very same type of action-potential signals.
>But as for Turing equivalents, computing in parallel is still
>equivalent to a big fast Turing machine (that is, equivalent in the
>way they behave, which is what I said above). The brain can
>reconfigure some "hardware," but there is other hardware that the
>brain cannot reconfigure. That hardware is equivalent to the hardware
>that the Turing machine cannot reconfigure. Everything the brain can
>reconfigure can be simulated on a big fast Turing machine.
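For what the quoted equivalence claim amounts to in the simplest case: a conceptually parallel update can be reproduced exactly, one cell at a time, on a sequential machine, provided the old state is buffered. A toy sketch (a Rule-90-style cellular update, chosen by me only as an illustration):

```python
# A step that is conceptually parallel (every cell updates at once)
# versus the same step done sequentially from a frozen copy.
def parallel_step(cells):
    """All cells 'simultaneously': each new value from old neighbours."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

def sequential_step(cells):
    """One cell at a time, reading a buffered copy of the old state."""
    old = list(cells)  # buffer: the sequential machine's extra bookkeeping
    new = []
    for i in range(len(old)):
        new.append(old[(i - 1) % len(old)] ^ old[(i + 1) % len(old)])
    return new

state = [0, 1, 0, 0, 1, 1, 0, 1]
assert parallel_step(state) == sequential_step(state)  # identical results
```

Whether this kind of behavioural equivalence settles anything about brains is, of course, exactly what is in dispute above.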
But you cannot move from "can" to "does" ... so your speculation
carries no more weight than the supernaturalists', perhaps the
Scientologists who think we are possessed by Thetan demons ..
both accounts work in theory to cover any potential type of experience,
but neither are true in practice.
Start from the phenomena, look at how the brain IS constructed, and
the case that the brain is a neural computer (or cluster of) is irrefutable.
S> Furthermore, and the key point, they aren't CONSCIOUS but
S> we are ... our neural wetware is capable of manifesting non-physical
S> components and the associated phantom illusions ... do you claim
S> that Turing machines could have, then lose but remember, a pineal
S> eye (or any other body part come to that). I think not.
>When we design a functional human-equivalent AI, it will obviously be
>a Turing equivalent. Will you be in the camp of philosophers who will
>insist that it still cannot be conscious?
Go ahead and do it ...... if you can. But even if you achieve it, YOU would
still deny it could be "conscious" because of your strange physicalist beliefs.
And you are talking only about a "human EQUIVALENT" .. I and MVT
are talking about the real things and how they actually DID evolve, not
some simulation fiction!
Post-human Council Member
-unless you love someone-
-nothing else makes any sense-
This archive was generated by hypermail 2b30 : Mon May 28 2001 - 09:56:16 MDT