Steve Nichols wrote:
> >I had not heard the news that brains were infinite-state. Last I
> >heard, atoms were all finite-state, so a finite clump of atoms must
> >necessarily be finite-state.
>
> >The brain has very very many possible states, but it is no more
> >infinite state than a hundred billion thermostats would be
> >infinite-state, if you wired them all together in an interesting way.
>
> @ But the brain can self-organise and forge new neuronal patterns.
> @ Furthermore, it is a massively parallel distributed system, and
> @ *not* a Turing machine or von Neumann computer. Cite Kohonnen &c.
>
> @ I doubt that it is ever in the same state twice!
I doubt that as well! But cite von Neumann, Turing and Church for
pointing out that a Turing machine can do everything a massively
parallel distributed system can do given enough time and tape. We
use massively parallel systems today because they are faster and
cheaper than doing these jobs serially, not because there are some
things that massively parallel systems can do but Turing machines
can't.
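Just to make that concrete (a toy sketch of my own, not anything from
Turing's proof): a "simultaneous" update of a bank of thermostats gives
exactly the same answer as updating them one at a time on a serial
machine, as long as the serial machine reads only the old inputs.

```python
# Toy illustration: a serial machine reproduces a "parallel" update step.
# Each thermostat switches on when the room is below its setpoint.

def parallel_step(temps, setpoints):
    # Conceptually simultaneous: every unit reads, then every unit writes.
    return [t < s for t, s in zip(temps, setpoints)]

def serial_step(temps, setpoints):
    # One unit at a time, reading only the old inputs -- same outcome,
    # just slower.
    out = []
    for i in range(len(temps)):
        out.append(temps[i] < setpoints[i])
    return out

temps = [18.0, 22.5, 20.0, 19.5]
setpoints = [20.0, 20.0, 20.0, 20.0]
assert parallel_step(temps, setpoints) == serial_step(temps, setpoints)
```

The serial version takes more steps, which is the whole point: you pay
in time and tape, not in what can be computed.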
> @ Any thought is possible, there are no boundaries to imagination.
Yes, there are. Try to think of a REALLY BIG NUMBER. Now think of a
number that long with all of its digits randomized. You can imagine
it as an opaque term (just like most of us can only imagine a number
as large as a trillion) but you can't actually hold it in your mind.
Alternatively, try to imagine all of Shakespeare's Hamlet at once.
Hey. That's sort of like the memory limitations on a computer. Hmm.
> @ Apparently there are more possible moves in a game of chess than
> @ there are atoms in the universe, and there are presumably an (infinite)
> @ different possible games of chess.
Ah, no. Here again you've mistaken really really big with infinite.
There quite obviously AREN'T an infinite number of games of chess,
because there are a finite number of legal states that the board can
be in.
Actually, you're right if you count games where the players keep
moving their pieces back and forth in a circle. They can do that an
arbitrary number of times. But I don't think that's quite what you meant.
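The finiteness of board states is easy to bound loosely (a
back-of-the-envelope sketch of mine, not a serious count): each of the
64 squares holds at most 13 values (empty, or one of 6 piece types in
either color), so there are at most 13^64 positions. Enormous, but
finite.

```python
# Loose upper bound on chess board positions: 64 squares, each empty
# or holding one of 6 piece types in either color -> 13 possibilities
# per square. This overcounts wildly (illegal positions included), but
# that's fine for an upper bound.
positions_bound = 13 ** 64
print(positions_bound)
print(len(str(positions_bound)))  # a 72-digit number -- big, not infinite
```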
> This is because we are talking of
> @ dynamic potentials. The thing with MVT is that we recognise a non-
> @ physical component ... this is true infinite-state!
Non-physical components may well be infinite-state. But non-physical
components can't interact with physical ones, so they can't be
measured.
Why do you bother making claims about non-physical components? Why
not settle for purely physical animal intelligence and be done with
this stupid field of philosophy?
> >John and I actually DO think that the human brain is (analogous to) an
> >oversized Turing machine. We think that it's programmed, and that we
> >make "decisions" the same way that Deep Blue "decides" to move its
> >bishop.
>
> How does such (unsubstantiable) conjecture even begin to explain
> consciousness?
I said that so as to make sure you'd know what Clark and I had in
mind; I didn't expect to convert you just by *saying* that.
> How can any Turing machine, a glorified card-reader, have
> experience?
That's a Really Good Question. But how can atoms, a glorified
billiard game, have experiences? ;)
> Anyway, big blue is a CPU machine, not even a neural computer, so is
> absolutely nothing like a brain. Are you saying the DNA is the
> "program" or what other medium are you identifying?
The medium is your neurons. None of them can do anything
non-physical. So your brain can't do anything non-physical. So your
brain is entirely physical. Physical stuff happens in lock-step
(because time happens in very small quanta) so the brain is emulatable
with a sufficiently complex Turing machine.
> Hang about, Deep Blue is an extremely limited program that just plays
> common chess. It is so dumb it can't even play other chess-like games, let
> alone "learn" ... it cannot even start to cope with natural language.
We AI people have a problem. People say: "You can't make a computer
'learn.'" Then we make them learn. They respond to things in their
environment in an interesting and useful way. And people say: "THAT'S
not really learning! THIS is learning!" So we make them do that, and
they say "THAT'S not really learning! THIS is learning!"
Do you KNOW how long people said that we couldn't make a computer that
could improve its own playing style? And to then come back and have
people say "Hah. Chess is easy!" It chills a man's heart, I tell you.
Improving your game of chess is learning. It's simple learning, but
it IS learning. More to the point, we don't do anything more
interesting, on a metaphysical level, than what Deep Blue does.
> What about extrapolation abilities
Deep Blue extrapolates. "Not enough!" I don't care.
> how can programs alter themselves to deal with entirely new
> circumstances?
Not all PEOPLE are good at that. But nonetheless, an adequately
complex Turing program can "deal with" circumstances as new as we can.
> Complexity (more depth) is not any sort of answer. Learning
> abilities are needed.
No. Deep Blue has everything we need as far as learning chess is
concerned. Humans have a few general algorithms with lots of entropy
thrown in. We'll find them, if it's worth it, or we'll build our own
from scratch. And we'll program them. Just like Deep Blue.
> > Yes, it does. It has a "desired" temperature, which it must
> > "remember" in order to work properly. The "desired" temperature isn't
> > a property of anything else in the thermostat-heater-room system, so
> > it must be an internal state of the thermostat.
> But precisely my point is it has not control over any "memory" of
> the temperature setting, this is made EXTERNALLY and is not
> within the thermostat's remit.
The same thing happens to us. We don't have free will. Nothing is
within our remit.
> So you are completely wrong in
> anthromorphising that the thermostat "desires" a setting. A digital
> switch is On or OFF, there is no intermediate or internal state, either
> current goes through it or not. It cannot override its programming.
Neither can you. Your programming is more complex, but that's all.
You act according to your desires and beliefs. You cannot change your
desires very much, and you can hardly change your desire to desire
different things at all. You cannot change your beliefs either. You
can decide what to say, but you cannot decide, for example, to drop
your view about MVT and agree with me. You have to be convinced, and
you're not in control of when you get convinced.
You're not in control of when you're happy. You do things to make
yourself happy because you desire happiness because that's part of the
program. When you do things, you have only the slightest control over
whether they make you happy or not, and considerably less control over
your desire to feel happy. You're not in control of these things.
A 128-bit thermostat knows the current setting on the heater, along
with the desired temperature down to almost 40 decimal places. It has
128 on-off switches strung together. It knows 128 on-off facts.
You know on the order of several hundred billion of them. But not
more than that. Not infinite, by a long shot.
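The arithmetic behind "almost 40 decimal places" (my own check, nothing
deeper): 128 switches give 2^128 distinct joint states, which is a
39-digit number.

```python
# 128 on-off switches give 2**128 distinct joint states.
states = 2 ** 128
print(states)            # 340282366920938463463374607431768211456
print(len(str(states)))  # 39 digits
```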
> > > > Turing machines are neither conscious,
> > >
> > > >How do you know?
> > >
> > > Well, they would fail the Turing test for starters.
> >
> > <blink blink> ALL Turing machines??? This is to say that we'll never
> > have a computer that will pass the Turing test, that we can never
> > write a program so complex that it could trick people into believing
> > that it acted like us. Were you thinking about this when you said
> > that?
>
> Even passing the Turing test does not suggest consciousness, just AI.
And what suggests consciousness besides intelligence? What test do
YOU have for consciousness besides intelligence? Surely not phasic
transients? I can make my Turing machine flick its eyes about.
> The very fact that every aspect of Turing machines actions can be
> predicted, I would say, might even preclude them from consciousness.
No more than you are precluded from consciousness. Your atoms have no
free will. You are composed entirely of atoms. You have no more free
will than they do. An adequately informed and powerful computer could
calculate your every move in a controlled environment.
> Yes, but as I try to point out, what MVT brings to table is *absent* or
> non-physical, phantom components that complete the circuit!
Right, I'd forgotten that. But non-physical components are no more
explanatory than invoking the soul unless you have an empirical test
for them. How can you show that you're measuring something
non-physical, rather than just physical brain activity?
> A cathode ray tube is infinite-state in that it is fully variable, scalar,
> but because it operates within boundaries it is ANALOG. This stuff
> is fairly rudimentary solid-state physics, you have no grounds to be
> obfusticating here.
Analog is NOT infinite-state. It is very-many-state. Really big
number. So big that you can approximate that they're infinite in all
the equations and get correct results. But they are NOT
infinite-state. The analog to infinity is only *really big*. You
seem to keep forgetting that.
A hundred billion is a really big number. Close enough to infinity
for most purposes. But not these purposes. Because it implies that
if I hook a hundred billion thermostats together, or give my Turing
machine a hundred billion feet of tape, my machine could do whatever
your brain and your analog equipment can do.
That's the difference between infinite-state and awfully-many-state.
It's the difference between transcendent and awe-inspiring.
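The approximation point can be sketched as quantization (a hypothetical
illustration of mine, not anything from solid-state physics texts):
sample an "analog" value at n-bit resolution and you get a strictly
finite number of states, with an error that shrinks as you add bits.

```python
# Quantizing an "analog" value in [0, 1) to n bits: finitely many
# states, with worst-case error that shrinks as the bit count grows.
def quantize(x, bits):
    levels = 2 ** bits           # finite number of representable states
    return round(x * levels) / levels

x = 0.123456789
for bits in (4, 8, 16):
    err = abs(quantize(x, bits) - x)
    assert err <= 1 / 2 ** bits  # error bounded by one quantization step
```

At enough bits the error vanishes for all practical purposes, which is
exactly why analog can be treated as infinite in the equations while
remaining very-many-state in fact.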
> No, phasic transients occur simply because a circuit is undergoing
> transformation from finite-state (lock step) to self-organising ....
> and they happen after the removal of an external clock (whether
> electronic or organic pineal eye). Aren't Turing machines always
> lock-stepped?
Why, yes. Along with ATOMS.
I've not seen data that shows that phasic transients occur on account
of a circuit going from finite state to very-many-state, but I'll take
your word for it, because it hardly matters. Very-many-state is not
infinite state, so a Very-large Turing machine can do the job nicely.
> >Look, you're overlooking the very simple point that in order to make
> >any kind of induction, you need to first NOTICE a correlation.
>
> Absolutely. The main body of MVT concerns comparative brain
> anatomy and behavioural difference between E-2 and E-1 animals.
Ah. That's good news. But you just got through telling me that
intelligence is not consciousness, around line 150 or so. So while I
really do hope you have an excellent theory of intelligence on your
hands, a perfect theory of intelligence is, unfortunately, a
non-explanation of consciousness.
> You obviously haven't read anything of MVT. All mammals and birds
> are E-1, and have REM. No cold-blooded animals have REM. The
> pineal eye atrophied across all species during the reptilian/mammalian
> boundary, and during the emergence of endothermy (internal or warm-
> blooded strategies). There is a clear experimental correlation between
> absence of pineal input (after pinealectomy, or when pineal eye has
> been covered by metal foil and subject reptiles compared with a control
> group) and intelligent behaviour ... *awareness* ... I don't really like to
> use the "C" word!
Actually, I knew that tidbit. But then you're observing intelligence,
not consciousness.
The problem of other minds asks: "I know I'm conscious and
intelligent, and I know you act intelligently. But are you conscious,
or do you just act that way?" This is a problem which your theory
doesn't solve. Keep your pants on: a solid theory of animal
intelligence will still win you the Nobel, but it won't help you solve
the mind-body problem.
>
> >But you can't observe any such cases, thanks to the problem of other
> >minds.
>
> Yes, as I mention above, there are about 130 years of records of such
> experiments. Other minds is an artificial lingoistic problem, it
> doesn't stop consciousness (sentience) happening, just gives philosophers
> something to argue amongst themselves about.
Yeeeeessss. But so is the mind-body problem. Why do you even CLAIM
to have solved it? Why not sit pretty with a theory of animal
intelligence and let the philosophers do their jobs on consciousness,
for which you have no physical explanation?
> Newborn infants seem to have empathic abilities, plus abilities
> to monitor and judge emotions in others.
So does a lie detector. Another input device to attach to a complex
Turing machine.
> It is a fair bet that if someone
> is screaming in pain, particularly if they have correlating signs such
> as a red-hot poker sticking up their arse, that they are actually feeling
> pain, in much the same way you would. I really fail to see the problem here,
> other than that you cannot be the other person so have to rely on reports.
^^^^^^^^^^^
That IS the problem! The problem is that I have no proof that brain
activity implies actual feelings (in anyone but me), only that it
correlates with the behavior. This is a rather boring philosophical
problem. I recommend
that you ignore it, that you never think on it again, and sell the
world your theory of animal intelligence. Most importantly, never
again use the word consciousness. You have no proof of that. Nor any
reason to care! So skip it.
> >Maybe you have something simpler in mind. Maybe you're just positing
> >MVT as a theory to explain how and when things can pass the Turing
> >Test. But you fail on THOSE grounds, too: you provide no *mechanism*
> >by which the phantom pineal eye causes people to be conscious, or to
> >act conscious, or, well, anything. You only claim that consciousness
> >DOES happen, and you tell us WHEN, but you don't explain HOW.
>
> On the contrary, MVT takes Melzack's neuromatrix theory of self (and
> gateway theory of pain) a step further, and does explain how experiences
> are identified by the brain as being self-originating (neurosignatures &c)
> or not. The deep structures of the brain evolved concurrently in early
> evolution with their main sensory information supplier, the median or
> *primal* eye (not "third eye" ... it predates lateral eyes). The brain
> expects
> information from the median eye ... and when it doesn't come from external
> light, generates phantom information instead.
All physical. Why does THAT correlate with consciousness? Why not
settle for intelligence?
> NO. Fodorian modules and central executive theories, supervenience
> and all the other philosophical lingoistic drivel are not physiology
> based, nor do they give a clear account of the *evolution* of
> consciousness.
But neither do you. Consciousness IS philosophical drivel. Let us
keep it. We've reserved "intelligence" just for people like you, so
you have something to prove and talk about scientifically and so you
guys can never run us philosophers out of a job. Why bother attacking
philosophy on this point?
> Experimental evidence can only observe behaviour .... would that be
> acceptable to you? If so, MVT has it in abundance. However, if as I
> suspect you are not happy with circumstantial evidence (correlation
> between REM and dream mentation, even in humans, cannot be
> absolutely proven since it relies on reports of the dreamer) then
> YOU have a problem, because you can never accept any account of
> consciousness, MVT or not.
Why YES. I think we're finally on the same page!
> No-one has come up with a better idea than MVT, which explains both
> waking and sleeping consciousness (24/7). Your Turing machine idea
> doesn't even reach first base, it only models intelligence.
But what do YOU get besides intelligence? Besides intelligent
behavior? Hell, what do you NEED more than that, besides
philosophical drivel?
> Perhaps I should develop new and irresistible hypnotic applications
> from MVT and enforce belief in it ... would this satisfy you?
It would certainly get you that Nobel!
-Dan
-unless you love someone-
-nothing else makes any sense-
e.e. cummings
This archive was generated by hypermail 2b30 : Mon May 28 2001 - 09:50:38 MDT