Re: Placebo effect not physical

From: Dan Fabulich (daniel.fabulich@yale.edu)
Date: Mon Jan 08 2001 - 01:21:56 MST


Steve Nichols wrote:

> >Yeah. "Why can't I, a world-reknowned jazz musician, be considered a
> >sculptor? Sculpture is nothing more than a barrier to keep
> >'non-sculptors' out."
>
> When you, or I, am sculpting we are "sculptors" ... so you can be
> a sculptor whenever you like, though maybe not a world-famous one!

Yes, but my point was that you can't do it by playing a lot of jazz
music. They're just different.

> >No, it has some uses. It has an intension as well as an extension.
> >"Physical" doesn't just describe the set of things which do exist, but
> >the set of things which could possibly exist. It supports
> >counterfactuals, as they say.
>
> This just proves my point. Not only do you define everything that exists
> or has been thought of as "physical" but everything that hasn't been
> thought of as well! What "uses" do you have in mind, I find this
> manoeuvre pointless.

Here, I think you're overlooking the difference between the physically
possible and the logically possible. While it is logically possible
that non-physical things exist, it isn't physically possible. The
physically possible is determined by the correct laws of physics,
whatever they may turn out to be. The logically possible is bounded
only by the rules of logic. It is logically possible for you to
psychokinetically cause a fork and spoon to dance in the air, but, as
far as we know, this is physically impossible.

The point of figuring out what is physically possible should be
obvious to you; it has been the project of physicists and scientists
for centuries, and may well engage us for far longer than that. The
utility of this knowledge has proven itself over and over again.

One characterization of physical objects is this: they are the objects
whose existence is physically possible. This does not include
everything that could ever be thought of. Fairies, for example, are
physically impossible, but could obviously be thought of. Fairies are
not physical objects: they are fictional magical ones. They are
logically possible, but physically impossible.

> You are also committing a well-known error by conflating "exists"
> with "physical" (the same strategy that was used in one of the
> well-known arguments for the existence of God).

I don't know that one. I know quite a few of these, but it'd be
interesting to hear another. Or maybe you have a different gloss on
an old argument?

> >More to the point, it seems to be in opposition to theories like
> >yours. If "physical" meant nothing, or everything, you should be able
> >to show that, under my own definition of "physical," your theory is
> >physical.
>
> Absolutely .. under *your* definition of physical MVT is physical!
> I just don't find any value in your definition.

I raised that point to emphasize that MVT is *not* physical, and,
therefore, if physicalism is correct, MVT is wrong. My definition of
physical does *not* include non-verifiable mental objects.

> >But you can't do that. You theory inherently relies on
> >non-physical elements. (Yet you try to claim the scientific
> >high-ground and resort to name-calling with cries of "medievalism.")
>
> This is just the way we define things, I don't rely on any "elements"
> that are not conceivable within your system, it is just that you want to
> call everything physical. Maybe the phantom eye concept is one of
> your "counterfactuals" .... but MVT offers several levels of explanation
> and doesn't demand a leap of faith into the non-material. I am
> happy to provide a descriptive level how the "phantom" pineal eye
> operates in terms of neural information, glandular and action potentials,
> and thus is experienced in the same way that externally originating
> sense-data/ neural information is experienced (as conscious "behaviour").
>
> There is an ADDITIONAL level that MVT/ Third Eye (Kether, Eye of
> Shiva, Dharma Eye) can be understood that is beyond and above the
> limitation of your syntax and knowledge, possibly, but this meditational
> awareness state is not what we are discussing, as I am aware it goes
> beyond what can be described by biology and physics. Cultural & mental,
> yes, with "consciousness correlates" that have been identified by most
> human-era societies as the "source" of spirituality ... but not particularly
> identified with your Charvaka (materialist) view ... the philosophy of ash
> (or dust ... the view that everything reduces to dust or atoms).
>
> MVT promises to reduce supernaturalism as well as materialism, in
> that it explains phenomena in more natural, basic terms. I do not
> think, whatever you say, that "time" and "meaning" can be transformed
> into atomic quantities (dust in India 2,000bc was thought to be the
> smallest particle). If your physicalism is anything more that a tautologous
> definition (which I assert it is) then you should have some evidence that
> everything is atomic.
>
> S> >Well, my physicalism denies that symbols are anything more than their
> S> >physical part. e.g. Writing is nothing more than scratches on paper.
> >
> S> So all scratches on paper ARE writing? Ridiculous!
>
> >Forgive me. Scratches on paper that are USED in a certain way by
> >people. No non-physical component, unless you consider their use as a
> >kind of "logical part," which is misleading.
>
> But your physicalism "denies that symbols are anything more than their
> physical part. e.g. Writing is nothing more than scratches on paper" but
> now in response to my point you add a qualifier about "USE" made by
> them, which seems an intentional quality rather than anything physical.

I was *correcting* myself. Note the use of "forgive me." While use
is "intentional," "intentions" are properties of physical objects, and
are thus physical properties. There's no non-physical thing at work
here. Just physical brains and physical bodies emitting physical
sounds and making physical scratches on paper. The scratches on paper
bear physical relations to other physical objects.

> >Properties of a non-physical object? No good. As far as I'm
> >concerned, ideas may be properties of physical objects, or
> >second-order properties of properties of physical objects, but at the
> >end of the day, it must be a physical property.
>
> But this is pure conjecture, just your "idea", what evidence do
> you back this with? I can back up MVT with scientific experiment
> and observation.

Excuse me? Reread the claim that I made. I said that there are only
physical objects and properties of physical objects. You do *not*
have an experiment to show that there are non-physical objects. You
might think that some of your experiments make it natural to assume
that there are some non-physical objects, but I happen to find that
notion rather unnatural.

Aside from various intuitions that the physical fully describes what
can possibly exist, I also have the advantage of history.
Technological progress has largely been a picture of getting the gods,
ghosts and ghoulies out of the way. It has been a bet that
traditional science is the finest tool available, and it has largely
paid off.

It's clear that mental non-physical objects aren't verifiable by
today's science. Behavior and brain states can be verified, but
consciousness can't. We've agreed that it's a matter of aesthetic
taste as to whether your experiments imply that the "phantom eye" is a
non-physical object or whether all there is to it is conscious
behavior.

>
> >1 a : a comprehensive and fundamental law, doctrine, or assumption b
> >(1) : a rule or code of conduct (2) : habitual devotion to right
> >principles <a man of principle> c : the laws or facts of nature
> >underlying the working of an artificial device
> >2 : a primary source : ORIGIN
> >3 a : an underlying faculty or endowment <such principles of human
> >nature as greed and curiosity> b : an ingredient (as a chemical) that
> >exhibits or imparts a characteristic quality
> >4 capitalized, Christian Science : a divine principle : GOD
>
> >I clearly didn't mean 2-4 when I said that principles were arrays of
> >symbols. (And, hey, if you'd had some charity, maybe you could have
> >seen that. This is an area where bad ethos will get you in trouble.
>
> Ah, but my system can cope with ALL the definitions, whereas your
> physicalism in my opinion fails to account for any of them.

Primary sources are physical objects. Faculties are properties of
physical objects. As for the divine principle, physicalism obviously
denies that. What's wrong with the way I "cope"?

> >Seriously, try to engage with respect and empathy. Try to figure out
> >how I could possibly be right. You just might figure out what I meant
> >for a change.)
>
> Within your strict set of closed definitions that everything (even
> undiscovered)
> MUST be physical, then you are right, but not otherwise. Anyway, how
> do you (of all people) expect me to *read your mind* when all you give out
> (and can give out) are physical scratches that I have to read literally!

Empathy, of the ordinary kind. It's a talent.

> You are what you say .... whereas I am far, far more because my view
> allows for the possibility of any physical *or* non-physical properties.
> I can identify with and "understand" fictional characters for instance,
> although these fictions cannot be reduced to atoms.

I can "understand" them, too, though I doubt we'll agree about them.
As far as physicalism is concerned, fictional characters just don't
exist. They aren't atoms or anything else. When we accept some truth
claims about fictional characters, we simply use the sentences in a
certain way: we physically interact with the physical words and brain
symbols.

> >So I meant definition 1. And laws, doctrines, assumptions, rules, and
> >codes are all statements, usually sentences. Sentences ARE arrays of
> >symbols. But the key point I was making there is that even principles
> >which are not written down as scratches on paper are encoded
> >physically in the brain: they are coded statements. The brain is a
> >symbolic processor in this sense, just like the Turing machine.
> >Principles in the brain are complex arrays of brain symbols.
>
> But in a Turing machine everything is explicit, and somewhere on
> the tape you can read these arrays. In the brain they cannot be
> observed or detected. What are "brain symbols" anyway, I have never
> heard of these .....

Brain symbols are abstract entities; they're the way information is
encoded in the brain. It's intentionally pretty vague so as to make
it possible to have a discussion about them without necessarily
knowing the details. We know that the brain stores information
somehow; when we refer to brain symbols we invoke "whatever it is that
the brain is doing there."

Every part of the brain can be observed, in principle if not today in
practice. (You're not a Penrosian, are you?) The brain follows all
the ordinary physical laws, and we can watch it do so under a
microscope. Some aspects of brain functioning are opaque to us today,
but as nanotechnology develops we'll be able to observe the details
even more closely. The brain "cannot be observed" only in the sense
that the workings of the Pentium III cannot be observed: they're both
very small and hard to watch closely with today's technology. But
they're both Turing equivalents. (Equivalents. Not machines.
Equivalents.)

> S> Are you claiming your strange definition is correct and *everybody*
> S> else including the Oxford English Dictionary is wrong?
>
> >No, I claim that you failed to understand what I said, because you
> >were looking for a way to make me wrong, rather than looking for how
> >or why I could possibly be right. You picked the wrong
> >interpretation, instead of the right one. See how that's a waste of
> >our time?
>
> It is not my job to argue your case or to pretend to agree with
> you when I do not see any merit in your argument. If I have a choice
> then you are not presenting a very water-tight case.

No one can present a case so water-tight that you can't interpret it
in some perverse way. This is a matter of interpretation so as to
maximize correctness. I'm not saying that at the end of the day you
need to agree with me, but you need to interpret what I say such that
it is as agreeable as possible. This may turn out to be less
agreeable than your own position, but until you do the earlier work,
you'll never know. It's easy to find dumb interpretations, and if you
stop there you'll never understand your opponents' arguments.

> To be honest, I still fail to see how (any kind of) "principle" can
> be physical.

"Principles" (of the first kind) are sentences. Do you see how
sentences can be physical?

> I don't ask you to agree with my views unchallenged.

Neither do I. But you should expect me to interpret your words in a
charitable way. Discussion is hellish if not impossible without it.

> There are records of young car crash victims who lived an apparently
> normal life, but on autopsy it was discovered that only 5% of their brains
> had been functioning! If brain damage occurs at an early enough age
> it is plastic enough that any part of the brain can take over functions of
> any other part ... so you are wrong in fact on this issue.

I knew this, and nothing I said contradicts this. All I insist is
that the brain has a non-plastic property or two, that it's not protean
in quite *every* way. The characteristics which it can't change are
the hardware. The ones that it can change are the software. The
hardware of the brain happens to be Turing equivalent. Some of the
hardware of the brain, like the laws of physics, can't break at all
(and so it's obviously nonsense to talk about self-repair in those
cases). But some of the hardware of the brain can break in a
non-repairable way. When it does, the brain is as helpless as a Turing
machine with a broken servo. Hardware need not be fragile, but the
brain does have some.

> >They are both equivalent in behavior to a Turing machine. Therefore,
> t>hey both share the limitations of a Turing machine. One of them is a
> >distributed parallel system, and the other one isn't. But (pay
> >attention now) that is irrelevant to the claim I'm making.
>
> >Distributed parallel systems are equivalent to Turing machines in
> >behavior. They can't do anything a big fast Turing machine couldn't
> >do.
>
> We have already decided that *real-time* responses cannot be
> simulated.

Irrelevant. The limitations to which I refer are not speed-related.
I'm happy to admit that neural computers can outstrip serial Turing
machines in speed in solving some problems. (Though not all.)

> Neither could a big Turing machine have evolved.

Also irrelevant. Being evolved or not has nothing to do with
behavior, nothing to do with today's limitations. You're telling me
properties they don't share. I agree with you about these. I'm
telling you some properties that they DO share.

> The brain can develop entirely new modules (eg. the neocortex)
> in response to changing environmental demands, how would your
> Turing machine do this?

You'll hate my answer. The brain can't develop entirely new hardware.
Entirely new physical layout is not hardware: it's software. Hardware
is the part you can't change, by definition. Neither can the brain
develop software beyond what its hardware allows, neither can the
brain develop software in some non-deterministic non-random way. So,
what can the brain do? Develop "new" modules following the rules
encoded in its hardware. The Turing machine could do that, too, if it
were fast enough. (And, again, speed is irrelevant to my claims about
general limitations.)

> >Neither one can change their own "hardware," by definition.
>
> Wrong, see above. I suppose you might be right in the case of
> some artificial silicon neural-computers, since these are designed
> and manufactured, but this is not true of brains, which are plastic.

I think you missed my point here. The definition of hardware is "the
characteristics of the system which cannot be changed internally."
The brain has some of those.

> S> You persist with the old hardware/ software language that just doesn't
> S> cut it when examining neural computers .... there is not software, just
> S> weight-states ... evolution sculpts the response directly.
>
> >What's wrong with calling that software, exactly?
>
> Because it isn't arrays or streams of signals (code), it is electrical
> charge only .... there is no symbolic equivalent. You get the two
> types of computers to behave similarly, but the method by which they
> reach this behavior is not equivalent or even similar. Your Turing
> or von Neumann machine also has electrical pulse, but this is not
> what you mean by "software" ... and nor is the same as "weight states"
> as these carry different information than logic gate arrays &c.

They only need to have a few characteristics in common to be "similar"
in the way I mean. As far as I'm concerned, transistors and wooden
rods are "equivalent" for the purposes of this conversation: the rods
of Babbage's engine are equivalent to the transistors in modern-day
desktops. They have little in common (and heaven knows that no
wooden-rod computer could have evolved under normal circumstances),
but just enough to make my point. The computers themselves are
analogous to individual neurons, but the arrangement of neurons is
analogous to the connection of many computers, of many Turing
machines, which is equivalent in behavior to one big fast Turing
machine.

If anything, you might try poking holes at my claim that many Turing
machines hooked together are equivalent to one big Turing machine
(since, of course, the one big one couldn't do what the many little
ones could as quickly, or in quite the same way) but the analogy
between a neuron and a Turing machine is pretty strong.
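
The "many machines hooked together" claim can be sketched as follows
(my illustration, not a proof): one serial loop interleaves several
independent workers round-robin, and it ends with exactly the results
the parallel collection would compute. Only the wall-clock timing
differs, and speed isn't the relevant property.

```python
# Illustrative sketch: several "machines" running in parallel are
# reproduced on one serial machine by interleaving their steps.
# Each worker is a generator; one loop advances them round-robin.

def worker(name, n):
    """A stand-in 'machine' that sums 1..n one step at a time."""
    total = 0
    for i in range(1, n + 1):
        total += i
        yield  # hand control back, as a time-slice would
    yield (name, total)  # final answer

workers = [worker("a", 3), worker("b", 5), worker("c", 4)]
results = {}
while workers:
    still_running = []
    for w in workers:
        out = next(w)
        if out is None:
            still_running.append(w)  # not finished; keep scheduling it
        else:
            name, total = out
            results[name] = total
    workers = still_running

print(results)  # same answers a truly parallel run would give
```

Nothing in the final `results` depends on whether the steps happened
simultaneously or interleaved; that's the equivalence I'm claiming.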

> >Look, I'm not going to *bother* to cite this no-brainer. Some
> >characteristics of the brain are non-plastic. The fact that the brain
> >can't change the laws of physics implies that the laws of physics can,
> >if need be, serve as the non-plastic element. I'm not making a very
> >large assumption here, but important conclusions follow from this
> >obvious point.
>
> The Laws of Physics might be unalterable, but where do these disagree
> with the proven facts (can provide car crash refs if needed) that the brain
> (pretty much all of it) can be reconfigured for different tasks, and that
> cells
> and neurones are constantly dying, creating new connections, and even
> physically migrating during early brain formation.

Look, we agree that almost all of the brain can change, but I hope you
notice that almost completely plastic does not imply completely
plastic. There's some non-plastic properties to the brain. At least
one. For example, the brain can't change the laws of physics: the
brain has all the properties which physical objects must have, and
can't change any of those. I'm making a very small claim when I say
that there's at least one property that doesn't change. I understand
that most of it can change. It's the bit that can't that I'm
interested in. The bit that can't is the brain's hardware.
Everything else is software. That's because the hardware just is the
part that can't be changed internally, by definition. The brain has
some hardware. That hardware is analogous to Turing machine hardware,
which also can't be changed internally.

> >Because materialism is scientific. Materials can be scientifically
> >verified. Phenomena cannot. No other philosophical theory, including
> >MVT, can claim the scientific high ground. We need nothing more than
> >the materials to explain everything material. Why invoke the ideal
> >when we have a science of materials?
>
> But only "materials" can be verified (molecular descriptions?) by
> science, whereas you widen the claim for physicality to everything,
> even principles, which cannot be verified as materials. No way.

Sentences *are* physical. I can not only point to sentences, I can even
drop them on my foot. You KNOW something's physical when you can drop
it on your foot.

> Yes, ditch the philosophy by all means. It is quite inadequate.
> I argue for a post-human aesthetic that can embrace the
> powerful new vocabulary and world-view that MVT offers. The
> many various mental phenomena can all be described using
> the evolutionary narrative (scientific) offered by MVT, whereas
> the old philosophical jargon offers nothing. I disagree with you
> that functionalism solves Leibnitz Law , by the way, on any
> level other than by linguistic contrivance. MVT offers a constructive
> reconciliation by means of the virtual generic sensor as a bridge.

Here, I think, you just referred to an argument which you might
present, but didn't actually bring one to the table. Obviously a
functionalist might say the same about MVT.

> >Not very. All physical objects can be pointed to. You cannot point
> >at any properties, however. Time is a property of a physical object.
> >It is not a physical object, but it is a physical property. Same
> >thing with length. You cannot point to two inches, but you can point
> >to objects which are two inches long, and no non-physical things have
> >the property of being two-inches long.
>
> God is a fictional character .... he cannot be pointed to so is not
> a physical object ... but how is he a "property of a physical object?"

Fictional characters aren't properties of physical objects. They
aren't anything at all. We accept some truth claims about them, but
that's a relationship between us and the sentences, not between
anything and the characters.

> What you call a "physical property" could equally well be described as
> a "concept."

The words "property" and "concept" are often used interchangably, but,
of course, one of them is held by the object with nobody else around,
the other is something that we have relative to the object.

> >Because even under idealism there will be a difference between the
> >hallucinations of brains and minds. The one will be a "physical"
> >hallucination, under that wacky definition, but even then, the mind
> >STILL won't be "physical." Same old problem, even under a radically
> >bizarre definition of "physical."
>
> Sure, idealism (an human-era philosophical device, like physicalism)
> gives some bizarre conclusions ... but no more bizarre than your
> view, and equally (or even less) unprovable.

Again, you're missing my point. My point is that you raised idealism
as if it were an argument for MVT, when it is anything but.

> S> but I actually think the distinction between phyical and mental is
> blurred,
> S> the shape of the body changes the shape of the mind, and the shape
> S> of the mind affects the shape and action of the body (Aristotle).
>
> >Aristotle didn't know, just as Descartes didn't know, that the
> >physical world is causally closed. That was our original problem,
> >you'll recall.
>
> Quantum physics allows (theoretically) non-causal effects, action at
> a distance and so on.

We can get rid of these with MWI, but, if you like, we can even accept
the Copenhagen interpretation with spooky action-at-a-distance stuff
and still notice that all you get in the world are determined effects
and random effects. Neither of these is a mental effect.

> We also cannot describe what "Laws" are in operation at the point of
> singularity in a black hole.

So what? There are no black holes in the brain. The laws of physics
operate just fine there.

> On macro-level description, Aristotle seems to be correct. And we
> can think, plan and mentally model (internal thought, not possible
> to E-2 animals!?) action before manifesting it physically. The mind
> affects the body.

Deep Blue can and does internally model the board before making a
move. (Will you at least grant the machine that? It literally has
symbolic pictures of the board stored in its RAM. That's a model,
by anyone's account.) But no one's mind affects Deep Blue when it
does this.
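
For the flavor of what "internal model" means here, a toy sketch (mine;
Deep Blue's actual code is vastly more sophisticated, and the position
below is made up): a board held as plain data in memory, with candidate
moves tried out *internally*, on a copy, before any move is committed.

```python
# Toy sketch: the "model" is nothing over and above bytes in memory.
# Candidate moves are evaluated on a copy of the board; the real
# board is never touched during deliberation.

import copy

board = {"K": (0, 0), "p": (3, 3)}  # hypothetical minimal position

def score(b):
    """Crude evaluation: how close our king is to the pawn."""
    kx, ky = b["K"]
    px, py = b["p"]
    return -(abs(kx - px) + abs(ky - py))

def best_move(b, moves):
    """Simulate each move internally; commit nothing."""
    best = None
    for dx, dy in moves:
        trial = copy.deepcopy(b)  # the internal model
        kx, ky = trial["K"]
        trial["K"] = (kx + dx, ky + dy)
        if best is None or score(trial) > score(best[1]):
            best = ((dx, dy), trial)
    return best[0]

move = best_move(board, [(1, 0), (0, 1), (1, 1)])
print(move)  # (1, 1): chosen by modelling, before any move is "made"
```

The deliberation happens entirely over physical memory states; no
mind needs to intervene for the modelling to occur.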

> If the loss of the pineal eye explains how & why modern brains AND their
> accompanying mental life evolved, then this is our best shot at clearing up
> associated problems (Globus' and Menakers's theories in physiology) and
> getting rid of philosophical quandaries. I challenge you to suggest another
> natural/ physiological scenario that resolves Leibnitz Law ... Putnam
> just doesn't cut it.

Yet you claim it's a matter of opinion. As far as I can tell, your
account is "deeper" than Putnam's only in your claim that
consciousness is not-having certain brain capacities. Yet you claim
that Putnam doesn't cut it, that having certain capacities is not
consciousness. If *having* capacities is not consciousness,
you've got quite an uphill battle to show how *not* having capacities
is consciousness. The intuition is obvious: capacities are different
from feelings. They're logically distinguishable. But the link
between them is unexplained in Putnam's account and in your own.
If Putnam doesn't cut it, you don't cut it either.

> >And, I argue, the reverse of the case: MVT FAILS under
> >all explanatory accounts. It's no good under idealism, no good under
> >physicalism, no good under dualism (even scalar dualism), it's just no
> >good, unless you resort to equating MVT with MVT', which is an
> >acceptable interpretative manoeuvre, but makes your theory no theory of
> >consciousness at all.
>
> MVT doesn't fail .... Idealism, Dualism and Physicalism fail!
> Bye bye to failed lingoistic philosophy. None of these accounts
> resolves the mind-body problem .... physicalism might try to deny
> the mental, but still cannot overcome the Identity = Interchangeability
> issue because the thoughts of the brain ARE NOT the cellular matter
> of the brain, however you cut it (even as "properties" or "conscious
> behaviour.") They may correlate with, but are not "fully interchangeable"
> since the language used to describe each type is different.

Look, you've got interchangeability problems, too. If the pineal eye
is a feeling, then it's interchangeable with mental statements, but
it's not interchangeable with claims about cellular material. A
feeling, on your terms, is more than not-having cellular capacities.
But if the phantom pineal eye is just the absence of the pineal eye,
then it's not a feeling.

You say it's "abstract," but that's just an opaque term to hide the
fact that you haven't solved this problem at all. We have different
beliefs about feelings and physical stuff. We say different things
about them. If you conflate the physical absent capacity with the
feeling, you'll feel real good about yourself, but you won't have
solved the problem.

> But if you claim this, then "brain states" and "thinks" are the same thing,
> so you can substitute one for the other in any sentence without changing the
> meaning? Is this what you are saying?

They are, of course, different parts of speech. To be precise: "is
thinking" is interchangeable with "is in certain brain states." "is
thinking about X" is interchangeable with "has certain brain symbols
which refer to X."

> >Yes, meaning is as physical as an air-wave.
>
> I can measure sound in decibels, or with an air sock.
> It has location, speed, frequency and other physical
> properties.
>
> How do I measure "meaning" ..... what tests can I do
> and what physical measurements do I take, using
> what instruments?

Lexicography. Sociology. Anthropology. Meaning is use. You look
and see how people use the words. You write it down. You publish
articles about it. You participate in the writing of dictionaries and
encyclopaedias. Did you not notice that Linguistics is a science?

> >When I simulate a neural network that does this on my desktop, is it
> >exhibiting a program or constraints? I'm not conflating these;
> >they're the same. The presence or absence of a human programmer
> >doesn't imply that it's not a program.
>
> I disagree that a serial simulation is identical with a physical, parallel
> instantiation ... they may exhibit behaviour that we think is equivalent,
> but one is algorithmic and provable at any point, and the other is not.

Both are algorithmic, but one has a more complex program than the
other. Look at actual parallel neural computers. There is absolutely
an algorithm which completely describes their behavior. It's a bitch
to write down, but it's obviously there. Stringing together
algorithmic components makes for a more complex algorithmic object,
not something radically non-algorithmic.
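
For instance, here's the forward pass of a tiny hand-wired network
written out as an explicit serial algorithm (my illustration; the
weights are picked by hand, not learned). Every weight, sum and
threshold is on the "tape," and the program is provable at any point,
which is the sense in which the parallel device is algorithmic too.

```python
# Sketch: the behavior of a small parallel neural network written
# as an explicit serial algorithm. The parallel instantiation would
# fire each layer's neurons at once; here we visit them one by one
# and compute the very same function.

def forward(inputs, layers):
    """Serially compute what the parallel network computes at once."""
    activations = inputs
    for weights, biases in layers:
        next_acts = []
        for neuron_w, b in zip(weights, biases):
            total = sum(w * a for w, a in zip(neuron_w, activations)) + b
            next_acts.append(1 if total > 0 else 0)  # step activation
        activations = next_acts
    return activations

# A tiny hand-wired XOR network.
xor_net = [
    ([[1, 1], [-1, -1]], [-0.5, 1.5]),  # hidden layer: OR, NAND
    ([[1, 1]], [-1.5]),                 # output layer: AND
]

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", forward(list(x), xor_net))
```

Stringing more layers together only lengthens the algorithm; it never
makes the thing non-algorithmic.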

> Of course, reciprocally, there is not task that can be performed by a
> (dumb but fast) Turing machine or von Neumann that cannot be done
> using parallel distributed architectures ..... but I do not claim that
> somehow von Neumann processors are REALLY neural computers,
> or that neural architecture has *anything whatsoever* to do with their
> design or evolution. They are just a (less optimised) alternative!

Turing machines ARE neural computer equivalents. But that says more
about the neural computers than it does about Turing machines.

> >No, this is not an explanation. You don't explain why we necessarily
> >have a feeling when we have physical (in)capacities. Here, you just
> >re-insist that we DO.
>
> The medical literature on why (in ALL cases of recorded traumatic
> organ loss) we have phantom sensations explains this point adequately.
> There is a necessity about such phenomena, not a voluntary choice.

Again, I agree that people feel that way. You don't tell me anything
about the mechanism by which people feel. You tell me about the
mechanism by which people act like they're in pain. But you can't
show me how the pain links up with the brain states.

> >No. There's no gestalt when one link in the chain is missing: the
> >link between the physical (in)capacities and the feelings, on your
> >terms.
>
> The neurosignatures are generated along with 'phantom eye'
> information thereby identifying this information as "self" originating,
> but distinct from signals originating at the retina and so on.
> I accept Melzacks' neuromatrix theory of self, by and large.

Again, this is not enough. Identification as "self" originating might
be the same as feeling, depending on what you meant by
"identification." If it IS the same as feeling, you've left it a
mystery HOW this "identification" feeling happens, which is different
from (ta-da) identification brain states. If it isn't, then you've
left the feelings mysterious and told me all about brain states.

No solution.

> >Neural computers, all of them, are Turing machine equivalents.
>
> No ... as discussed above, I can simulate all the behaviours of a
> Turing machine on neural systems, but they are not identical.

Read what I said. Read what you said. I said "equivalent." You said
"identical." You ignore what I say and knock down a straw man who
thinks that they're identical. What's the point? Pay attention.
"Equivalence" and "identity" are radically different relations.

To be "equivalent" is to have one relevant property in common. To be
"identical" is to have ALL of one's properties in common. If you say
"I need a green thing! bring me a pear!" and I bring you a green
apple, you say "this is not a pear!" I'll say "I know, but this apple
is equivalent to the pear: they are both green."

It is foolish to then say "but pears are different from apples!"
because we both know that. The question is whether they are different
in a RELEVANT way. They need not be identical to be equivalent. Get it?
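
If it helps, the apple/pear point in code (purely illustrative):
equivalence is agreement on the relevant property; identity is
agreement on all of them.

```python
# Illustrative sketch: "equivalent" as agreement on one relevant
# property, versus "identical" as agreement on every property.

pear  = {"kind": "pear",  "color": "green", "weight_g": 180}
apple = {"kind": "apple", "color": "green", "weight_g": 150}

def equivalent(a, b, relevant):
    """Equivalence for this conversation: the relevant properties match."""
    return all(a[k] == b[k] for k in relevant)

def identical(a, b):
    """Identity: every property matches."""
    return a == b

print(equivalent(pear, apple, ["color"]))  # True: both green
print(identical(pear, apple))              # False: they differ elsewhere
```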

> You accept that the Turing machine is running neural simulations in
> lock-step (very fast, but not INSTANT) .. and so you fail on your
> equivalence claim just on the real-time property alone .... and I
> think there are further differences.

The fact that they differ on one property, or two, or a hundred, or
even more properties, doesn't imply that they aren't equivalent. They
only need to share the relevant properties (which, as far as I'm
concerned, are only one or two).

I'm definitely starting to tire here... as school starts up again, I
may have to drop this end. We'll see how much longer I can go.

-Dan

      -unless you love someone-
    -nothing else makes any sense-
           e.e. cummings



This archive was generated by hypermail 2b30 : Mon May 28 2001 - 09:56:17 MDT