Re: Sentience

From: Dan Fabulich (daniel.fabulich@yale.edu)
Date: Wed Dec 20 2000 - 18:48:45 MST


Steve Nichols wrote:

> >Don't be silly. I commit no Athena fallacy: I have no account
> >WHATSOEVER about how mind comes about, say nothing of one like you
> >describe.
>
> Exactly, the Athena Fallacy IS that you have no account of how the
> mind came about. The only human-been philosophical resource is
> the method of introspection ... which is an inadequate tool.

<quibble>To say that someone has committed the Athena fallacy is to
say that they have some *incorrect* view as to how the mind comes about.
I am in *ignorance* as to how the mind comes about. I don't *know*.
I can't possibly be committing the Athena fallacy on account of
this.</quibble>

> If you are making the linguistic point that it makes no difference talking
> about "phantom limbs" or "sensations experienced as if there is a
> phantom limb," then OK, but the *feel* of the phantom limb is real.

Sure.

> I have pointed out that I don't care if you phrase your description of
> consciousness as " a phantom median eye" or as "the experience as
> if a median eye was present" ...... but the vernacular does not impact
> on the phenomena, your whole posting is an attempt to confuse the
> issue ... clarification requires elimination of "epiphenomenalism"
> gobbledegook and all the other philosophical pseudo-language.

But if the phantom eye is a feeling, a mental object, then you're no
better off than when you started. We began by saying "I wonder how
the physical world is related to these mental objects!" If you reply
"they're linked through the phantom eye!" and it turns out that the
phantom eye is, itself, a mental object, then you haven't answered the
question.

> >Brains *cannot* act independently of chemical stimuli.
>
> How is a hypnotic suggestion or verbal cue a "chemical stimulus"?
> The brain of one person will react in response to suggestions
> given by another person (or TV &co). Change of state, both physical
> and conscious.

My error. I was interpreting "chemical" to include physical stimuli
as well, without telling you. Anyway, the point stands that the brain
cannot act independently of its external physical stimuli; it is
entirely determined by them.

> >They are
> >purely physical objects, with no more independence from the world than
> >rocks have. Biological organisms are completely physically
> >determined.
>
> If we were, then we could not experience internal light with no
> outside reference or causal source.

You're relying on the philosophical parts of MVT to defend this claim;
these are the very points which I reject.

> I am rather insulted that you compare me (and everybody else) with
> lumps of rock.

People were offended when they were compared with apes. But the
analogy is apt.

> >How do you "generate" a non-existent entity?
>
> But psychological entities do exist ....
> I take note that you have deleted my points about holes, which
> your schema fails to cope with, so I repeat:
>
> [snip your lengthy point about holes]
>
> How do you deal with holes (distinct from GAPS, which depend on
> the fact that there was ONCE something present but no longer, whereas
> a hole might be integral)?

Yes, I cut them out because I agreed with them. My response to this
was that it is *just* as mysterious how *existing* physical objects
could cause conscious experience as it is how *non-existing* physical
objects could cause conscious experience.

"NO!" you say. "It solves the Leibniz objection!" No, it doesn't.
See, non-physical objects can't move physical objects around, and they
CAN'T MOVE HOLES AROUND EITHER. They can't in any way affect the size
or shape of holes; to do so, they'd have to move the *stuff* near the
holes, which, you concede, is impossible. Neither does it help us to
say that the *holes* cause consciousness. Because, while you can show
that the holes are necessary for conscious behavior, you can't show me
how the *holes* can affect the non-physical world, either. You just
say that they DO affect the non-physical world, but you *don't* say
how *holes* can do that.

You can say: "Listen, not-having a median eye causes consciousness
reports, and other intelligent behavior." And I say, great! This is
fantastic news. That's MVT'. But then you go on to say: "What's
more, not-having a median eye causes consciousness itself!" and I say:
How can you possibly prove that?

I deal with holes by noting that they have no more explanatory force
than non-holes.

> >Brains are like Turing machines in that they are composed entirely of
> >simple physical parts that do simple physical things. These physical
> >parts move deterministically in lock-step. (Though, to qualify this
> >view, I'm a fan of many-worlds.)
>
> What parts of the brain "move?" ...

Neurotransmitters flushing about in your synapses, along with ions
pumping through your membranes, allow electro-chemical signals to be
sent throughout the brain and down your axons to your muscles, glands,
and other parts of your body. These are the brain's "moving parts."

> the dualist problem has been described
> precisely that the pineal gland (or any brain part) cannot be "moved" as,
> say, a hand can be moved.

Right. Those neurotransmitters move around only on account of other
physical phenomena pushing them about, via diffusion, chemical
reaction, protein restructuring, etc. All physical all the time.

> Also, the point with parallel computing is that things happen in
> PARALLEL, not in serial lockstep.

True, but this can be emulated by a Turing machine: in a given quantum
of time, it updates the first part, then the second part, then the
third part, ... and finally the Nth part. Then it moves on to the next
quantum of time, and so on. Thus a Turing machine is functionally
equivalent to any parallel machine.
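
To make that time-slicing concrete, here's a little Python sketch (my
own toy example, not anything from the thread; the XOR update rule is
arbitrary). A "parallel" update of N cells is reproduced exactly by a
serial walk over the cells, as long as each new value is computed from
a snapshot of the previous quantum:

    def parallel_step(state):
        # What an idealized parallel machine computes in one quantum:
        # every cell becomes the XOR of its two neighbours, all at once.
        n = len(state)
        return [state[(i - 1) % n] ^ state[(i + 1) % n] for i in range(n)]

    def serial_emulation(state):
        # The serial (Turing-machine-like) emulation: visit the cells one
        # by one, but read only a frozen snapshot of the old state.
        old = list(state)            # snapshot of the previous quantum
        new = list(state)
        for i in range(len(old)):    # first part, second part, ..., Nth part
            new[i] = old[(i - 1) % len(old)] ^ old[(i + 1) % len(old)]
        return new

    s = [0, 1, 1, 0, 1, 0, 0, 1]
    assert parallel_step(s) == serial_emulation(s)   # same result either way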

> >Moreover, the physical world is causally closed. Nothing non-physical
> >interacts with the brain.
>
> See the verbal cue example above.

Ah, but verbal cues, while they aren't *chemical*, are entirely
*physical*.

> Yes, nothing non-physical interacts with the PHYSICAL bits of the
> brain, but thoughts (dreams, and other non-physical stuff) can
> interact with the NON-physical gap in the brain. Consciousness is
> the infinite-state feedback loop between the brain and the
> environment made possible because the brain expects to receive data
> from a peripheral sensor that has gone missing.

Right. So: a red-wavelength beam of light hits my eye. It hits
red-receptors in my eye, which send electrochemical signals up to the
brain. How does this signal make me have the conscious sensation of
red? All I can see it doing is causing more physical activity in the
brain. The fact that there's no median eye there doesn't seem to help
this explanation out.

Similarly, suppose my consciousness has made a decision. It wants to
send a message down to my hand to type the word "red". How does this
signal get to my brain? Once again, the fact that there's no pineal
eye doesn't answer the question. And, not coincidentally, no signal
from my consciousness is necessary: my brain seems to be doing all the
work *already*. That signal from my eye triggered a complex but purely
physical system, which sent the message back down to my hand to write
the word "red." How did the consciousness help this process? Once
again, the non-existence of the eye doesn't help.
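
Put in toy form (my own sketch; the stages, names, and thresholds are
invented, not anything from MVT or the thread), the whole loop looks
like physical state mapping to physical state, with no extra
"consciousness" input consulted anywhere:

    def retina(wavelength_nm):
        # Transduce light into a crude neural signal: fire "red" receptors.
        return "red_signal" if 620 <= wavelength_nm <= 750 else "other_signal"

    def cortex(signal):
        # Deterministic processing: map the incoming signal to a motor plan.
        return {"red_signal": "type 'red'", "other_signal": "do nothing"}[signal]

    def hand(motor_plan):
        # The muscles carry out whatever plan arrives; nothing else is consulted.
        return motor_plan

    print(hand(cortex(retina(650))))   # light in, behaviour out: "type 'red'"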

Let me predict an answer: "The holes are in just the right places and
in just the right shape such that they were intimately involved in
both physical systems! The non-existence of the pineal eye explains
why you wrote the word 'red'; what's more, that 'red' signal passed
right through where the pineal eye would have been!" You're surely
right about that. But all this is part of MVT', the part of MVT
that makes only scientifically verifiable statements.

What you didn't explain is what the holes have to do with
*consciousness*. Why does the fact that the pineal eye
would-have-been there imply that your conscious mind had a
*sensation*? How did your conscious decision get the holes into the
right places?

> >How do you "generate" a non-existent entity?
>
> By "generate" a phantom eye, I don't imply any active processes, it
> is more like coins rolling down a ramp and falling into the
> correctly sized holes, giving the appearance of intelligently
> sorting coins into piles. The pathways to and from the old median
> eye (which predate even the visual cortex) expect that data
> travelling along them is from the outside world, whereas in E-1
> animals it is not! That is why I think consciousness is a trick of
> nature.

Conscious behavior IS a trick of nature. But what about consciousness?
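
To make your coin-and-ramp picture concrete (my own toy sketch, details
invented): coins roll past holes of increasing size and drop through the
first hole that fits. Nothing in the mechanism "decides" anything, yet
the output looks intelligently sorted:

    def sort_by_falling(coins_mm, hole_sizes_mm):
        # hole_sizes_mm must be in increasing order along the ramp.
        piles = {size: [] for size in hole_sizes_mm}
        for coin in coins_mm:
            for hole in hole_sizes_mm:      # the coin rolls along the ramp...
                if coin <= hole:            # ...and falls through the first hole that fits
                    piles[hole].append(coin)
                    break
        return piles

    coins = [24.26, 17.91, 21.21, 24.26, 17.91]   # a few coin diameters, in mm
    print(sort_by_falling(coins, [18.0, 22.0, 25.0]))
    # The piles come out grouped by size, with no sorter anywhere in the mechanism.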

> >Your tests establish a physical causal link
> >between the real pineal eye and a lack of intelligent behaviour. As
> >the eye goes away, assuming you're right about this, intelligent
> >behaviour begins to flourish.
>

> I find it hard to agree with your Epiphenomenalist assumption that
> everything that has happened in the universe to date would have
> happened in exactly the same way if no animal had ever been even
> remotely aware of anything, and that we are robots following out a
> behavioural script. This is not my experience.

This is like replying to the claim that the earth spins and revolves
around the sun with the claim "this is not my experience." From here,
it looks the same either way! It looks the same whether the sun goes
around the earth or whether the earth goes around the sun. Only when
we attempt to do some scientific inquiry does the correct view come to
the foreground.

Similarly, from the inside, it looks the same whether you're making
decisions or whether you're acting on externally determined desires.
Indeed, I argue that you've been externally determined to believe that
your desires are internally determined. But when we do some science
we see that only physical elements are at work, and that non-physical
elements play no role in the way we behave.

"Look and see!"

> >No. We will find structural behavioral analogs, but we will never
> >figure out why we see anything at all. We'll be able to figure out
> >*when* we see something, but not how.
>
> So you seem to agree with McGinn ... a very defeatist stance, and very
> unextropian! You haven't managed to answer my point against this,
> which I repeat since you avoided answering it here:

It's only "defeatism" if you're giving up on the possible. Is Gödel's
Incompleteness Proof pessimistic about the possibility of designing a
formally complete arithmetic? No. It shows that it's impossible.
Every optimist should accept "defeatist" arguments of that kind.

> S> Perhaps I should develop new and irresistible hypnotic applications
> S> from MVT and enforce belief in it ... would this satisfy you?
>
> This example was deliberately crafted because it underlines the
> pre-eminence of consciousness ... and goes against epiphenomenalism.
>
> How do you answer this method of "proving" MVT?

MVT' may well have some useful applications. You would not prove MVT,
but only MVT'. You still would not show how holes affect
consciousness, and would not have shown that consciousness can affect
holes.

> I fully intend to pull the turf away from under your feet! "Philosophy"
> (love of argument) cannot be roped off and protected in the way
> you would like ... weasel words cannot stop the march of progress.

Progress can march on. Why, if we never talked about consciousness
ever again, we'd lose nothing at all. Intelligence is all that
matters. Consciousness is a joke.

Why progress towards an understanding of a stupid word that
philosophers made up? Why not grant that they made it up so as to
keep you from putting them out of a job?

> So I don't accept your notion of MVT' ... because knowledge is seamless.

No, claims can be analyzed into parts. MVT is a compound claim about
observable facts and unobservable facts. I accept MVT's observable
claims and reject MVT's unobservable claims.

Suppose someone argued for the Fairy theory of physics. The Fairy
theory of physics is exactly like our own modern theory of physics,
except that it insists that each time two objects come under force,
two undetectable fairies come by and dance on each one, then promptly
vanish.

You'd say: "No, the Fairy theory is stupid. I accept all the bits
about observable stuff but reject the bit about fairies dancing on the
particles."

What would you think of me if I said: "I don't accept that notion
... because knowledge is seamless!"

That's what I think of your rejection of MVT' on those grounds.

> The mind-body problem ... only like can interact with like, therefore how
> does the (observable) brain manifest (non-observable) mentation .... can
> be dissolved by the MVT explanation in terms of an illusionary sense-organ
> which is nevertheless continuous (even supervenient, in Jaegwon Kim's
> mereological sense of this silly word, perhaps) with the brain matrix.

No. You don't explain how consciousness can interact with holes.

> An illusionary body part can deal with the illusionary world of conscious
> events.

The illusionary body part isn't there *at all*. You have "dealt with"
the world of consciousness only if you argue that "like affects like":
that the world of consciousness can interact with non-existent objects
because consciousness doesn't exist in the first place.

> >It uses a different algorithm from the one Kasparov uses. But
> >Kasparov uses an algorithm too. (His is certainly better, but it's
> >hard to read off from the neurons, and even trickier to program.)
>
> The human player tends to use entirely different approaches; Capablanca,
> for example, said that he only ever looked one move ahead, "but the best
> move!"

Yes, but his nerves did the rest, unconsciously. I rarely think very
hard about breathing, but it's rather complex.
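
The "one move ahead, but the best move" approach is still an algorithm,
by the way; the cleverness is just packed into the evaluation function.
A hedged sketch of what I mean (mine, not anything Capablanca or Deep
Blue actually did; the stand-in "game" and names are invented):

    def best_one_ply_move(position, legal_moves, apply_move, evaluate):
        # Look exactly one move ahead and pick whichever successor evaluates best.
        return max(legal_moves(position),
                   key=lambda m: evaluate(apply_move(position, m)))

    # Trivial stand-in game: a position is a number, a move adds to it,
    # and the "grandmaster's judgement" is the evaluation function.
    legal_moves = lambda p: [-3, 1, 4]
    apply_move = lambda p, m: p + m
    evaluate = lambda p: -abs(p - 12)      # prefer positions near 12
    print(best_one_ply_move(10, legal_moves, apply_move, evaluate))   # -> 1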

> Anyway, you haven't explained away the programmers needed for Deep Blue ...
> Kasparov can modify whatever heuristics he uses as a result of feedback from
> outcomes, but bugs in Deep Blue need programmers .... homunculi.

I'm saying that it's a coincidence that programmers were needed to
program Deep Blue while none were needed for Kasparov. Deep Blue was
designed by people. Kasparov looks designed because he evolved. Deep
Blue could have evolved, if the evolutionary pressures had gone the
right way, but they didn't, so we got us instead.

What more do you need?

> Will get round to dealing with your other points when I have more
> time. But to date, nothing you have said convinces me much. Why do
> you bother to plow a trough which you think is sterile and
> pointless? Science can throw up new solutions and information, but
> it seems your type of tautologous speculation can never get
> anywhere.

No, it never will. In fact, I think something quite radical about
consciousness, something that gets us quite far indeed, but we're
talking about something else at the moment.

> Best to ASSUME we have free will, even if we cannot ultimately
> know it to be so ... if we assume fatalism and determinism then no
> point ever doing anything? (Fatalism and determinism just as much
> conceptual make-believe as free will incidentally).

Determinism does not imply fatalism. I still need to get up in the
morning, or else I won't be happy. And I want to be happy. (Though I
don't choose to want to be happy.)

Yes, determinism vs. free will is a pointless debate. Except for when
people start saying that computers can't be like us because one of us
is determined and the other is not. That's when we have to have the
argument.

-Dan

      -unless you love someone-
    -nothing else makes any sense-
           e.e. cummings


