Robert Owen wrote:
> Ken Clements wrote:
Thank you, Robert, what I meant (and should have written) was:
Just say "no" to <qualia>.
Where <qualia> is the meme that claims qualia have any explanatory value.
>
> > Just say "no" to qualia.
>
> And say "yes" to what? Surely you don't mean to suggest
> that "subjective experience" is a quantitative state.
I wrote what I did because I inferred that Robert B. was looking over the edge into this vast wasteland of mental energy, and I wanted him (and others on this list) to know that at least one other person here has slogged through this morass and now advises not to go there. Perhaps Robert B. will want to go there anyway, and feed a bunch of his neurons to this meme that he could otherwise use elsewhere; that is his personal freedom.
In some ways I am sorry to be sending this <<qualia>> reply, because it will probably just feed <qualia> some more neurons (I'll bet D. Hofstadter would find this amusing). Given that qualia are entirely subjective, you can search forever in <qualia> for a falsifiable conclusion to test and not find one. This is one of the keys to survivability for many such memes: if they could be resolved in a reasonable amount of time, they would be, and we would move on.
> Do you experience your color "red" as an extensive magnitude?
> How "large" would you say "red" is? Compared to the size
> of, say, "yellow"? Or as an intensive magnitude? Which is
> heavier: blue or green?
Somehow I do not think the question "How large is red?" would give me as much trouble as "What is Mu?". No, I do not experience 'my' color "red" as an extensive magnitude, because I have not yet been able to get access to the level of neural processing at which it is one (or at least is a population density of firings). This does not bother me at all. If I had the resources to devote to the endeavor, I am sure I could have myself attached to a real-time tomographic cranial oxygen-uptake monitor and study this. Instead, I would rather just wait for MNT and get direct access.
>
Well, I have worked on this in another context, over 20 years ago. The context was not color, but the qualia of letterness in computer recognition of printing. The system we built had special hardware for feature extraction in binary images, such as automatic population distributions of all species of 2x2 bit patterns (called "quad fours"), and other topological data in an image. A typical difficult case is deciding whether you are looking at the letter "c" or the letter "e", which usually differ by only a few pixels at best, and can be very close when the cross bar in the "e" has been reduced to a dot.
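To give a rough idea of what that feature extraction amounts to, here is a minimal software sketch (the real system did this in special hardware; the function name and image representation here are mine, not the system's) of counting the sixteen species of 2x2 bit patterns over a binary image:

    # Hypothetical sketch, not the original system: count the 16 species
    # of 2x2 bit patterns ("quad fours") in a binary image held as a
    # list of rows of 0/1 pixels.
    def quad_four_histogram(image):
        counts = [0] * 16
        rows, cols = len(image), len(image[0])
        for r in range(rows - 1):
            for c in range(cols - 1):
                pattern = (image[r][c] << 3) | (image[r][c + 1] << 2) \
                        | (image[r + 1][c] << 1) | image[r + 1][c + 1]
                counts[pattern] += 1
        return counts

The resulting sixteen counts, together with other topological measures, are the sort of raw material the low-level code works from.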
> While you're at it, work out your definition of the sensation
> red (i.e. the COLOR red -- not the wavelength of the light
> associated with it--we already know that). We will need this
> definition to program our AI mechanism so it will know when
> it is experiencing "red". Otherwise it will fail to stop for the
> light! [No spectrometric devices permitted here -- we are
> concerned with the process of color-perception, not color
> discrimination or the frequency analysis of radiation.
> Spectrometers don't know about "red" because they don't
> know about anything)].
Although the low level code has access to the quad four counters and base image correlation counters to calculate the e-ness or c-ness of the image, at a certain point the higher level code does not, and just gets the e-ness vs. c-ness. This is because the system has been trained on thousands of cases of "e" and "c", and from these cases internal parameters have established what counts as low, medium, or high e-ness or c-ness. The high level software does not know where medium e-ness ends and high e-ness begins; it just knows it is looking at an image with high e-ness, and that, if it also gets a low to medium level of c-ness, it should call it an "e".
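As a toy illustration (my own reconstruction after the fact; the level boundaries are made up, not the trained parameters of the actual system), the high-level decision amounts to something like:

    # Toy reconstruction of the high-level decision described above.
    # The thresholds and level names are illustrative only.
    def level(score):               # score assumed to lie in [0, 1]
        if score < 0.33:
            return "low"
        if score < 0.66:
            return "medium"
        return "high"

    def classify_e_vs_c(e_ness, c_ness):
        if level(e_ness) == "high" and level(c_ness) in ("low", "medium"):
            return "e"
        if level(c_ness) == "high" and level(e_ness) in ("low", "medium"):
            return "c"
        return "reject"             # too close to call at this level

The point is that the high-level code sees only the graded qualities, never the pixel counts they came from.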
>
> I believe it is in order to assert with some emphasis that
> those who cannot distinguish the experiential reality denoted
> by the metaphor "Mind" from a manufactured assemblage of
> material components designed to perform predetermined
> operations called a "Machine" should confine themselves to
> discussing mechanics. I don't mean to be rude -- but 18th-
> century science should be discussed using 18th-century
> concepts - don't you agree?
Well, perhaps, in the sense that some people prefer 18th-century music to be played on 18th-century instruments. Also, I understand that one must be willing to study the work of the past if one expects to avoid its pitfalls. The philosophers of the 18th century were just as smart as we are today, but they did not have our tools and could not do the experiments we can do today. Twenty years ago there were plenty of experiments we could plan but not run, because the cost of the computing power was out of the question. Today most of those can be done on a cheap desktop, and soon the supercomputer of today will be the cheap desktop of tomorrow. In a few years, MNT will allow us to replicate brains, and put to rest many a <Mind> that has been kicked around by thinkers for millennia.
-Ken