On Monday, March 29, Hal <email@example.com> wrote:
>David Blenkinsop writes:
>> Hmmm, having never read up on the Kolmogorov complexity measure, I wonder,
>> is this roughly comparable to *entropy*, as in maybe complexity is a more
>> interestingly structured kind of "messiness" or something like that? That
>> would make a certain amount of sense to me, if consciousness gets an objective
>> definition at least in part from fulfilling a math "touchstone" of minimal
>> complexity . . .
>Yes, I believe Kolmogorov complexity is similar to entropy . . .
>> The problem with this is, if you look at entropy as such, the true value
>> of it is apparently quite dependent on the knowledge of any given observer
>> . . . Chapter 4 of Drexler's _Nanosystems_
>> text talks about a system's entropy being dependent on the knowledge that an
>> outside observer happens to have . . . doesn't it seem that the Kolmogorov
>> measure is going to be a lot like this, that we're not going to get anything
>> objective out of it, for defining a state of objective material consciousness?
>How, then, would you explain the second law of thermodynamics, which says
>that isolated physical systems always increase (or at least never decrease)
>their entropy? This is an objective description of the behavior of
>physical systems, and does not depend on the state of knowledge of any
>observer. Liquids evaporate to form gases, no matter what anyone
>thinks of the matter.
One thing to recognize right away is that basic thermodynamics was originally an empirical study of the behavior of things like steam engines. The statistical definitions, and the debates over how the statistics should be handled, came later, especially via physicists like Maxwell -- complete with half-serious speculation that maybe you could reverse entropy after all, with an intelligent device or "demon" that alters the statistics of fast vs. slow molecules in some way. In some sense, "basic thermo" texts are almost sure to be based on an empirical shortcut, or on a beginner's model of what the subject is all about. For more information, see Chapter 4 of _Nanosystems_, and also the paradox that Drexler describes for defining the entropy of a "glassy crystal" in the glossary entry for "entropy" at the back of the book. If someone like Drexler can catch the usual treatment of a subject in an out-and-out paradox, we can be sure that that treatment is only an approximation, and not the whole story.
In answer to your question about why the second law isn't violated after all, I should point out that the true statistics of this would seem to be very much a Bayesian (or Laplacian) kind of procedure, where entropy is defined in terms of probability, which in turn can depend very much on the state of knowledge of the person doing the calculations. Now, it's my understanding that if Bayesian theorists ever manage to be consistently wrong in their forecasts of certain events, they don't necessarily throw out their previous probability calculations; rather, they try to update their knowledge so that the probability numbers are more predictive, or better informed, in the future. Apparently, really well-informed or complete calculations *never* get the theorists into a situation where a fair-minded observer would insist that entropy is sure to be reversed. Systems always tend to evolve so that the observer knows fewer and fewer of the details, and entropy is increased, *unless* we open up the system so that an observer, or a "demon", or someone, can *find out more* about the system -- and *that* doesn't violate thermodynamics at all!
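To make the "entropy depends on knowledge" point concrete, here's a minimal sketch -- a toy system of four coins that I've invented for illustration, not anything taken from Drexler's text -- of the idea that an observer's entropy is just the logarithm of the number of microstates consistent with whatever that observer happens to know:

```python
from math import log2

# Toy system: 4 coins, so 16 equally likely microstates.
microstates = [(a, b, c, d)
               for a in (0, 1) for b in (0, 1)
               for c in (0, 1) for d in (0, 1)]

def entropy_bits(knowledge):
    """Entropy assigned by an observer whose knowledge is a predicate
    over microstates: log2 of the count of states still possible."""
    possible = [s for s in microstates if knowledge(s)]
    return log2(len(possible))

ignorant = lambda s: True                 # knows nothing: all 16 states possible
informed = lambda s: sum(s) == 2          # knows "exactly two heads": 6 states
omniscient = lambda s: s == (1, 0, 1, 0)  # knows the exact state: 1 state

print(entropy_bits(ignorant))    # 4.0 bits
print(entropy_bits(informed))    # about 2.585 bits
print(entropy_bits(omniscient))  # 0.0 bits
```

The better-informed the observer, the lower the entropy they assign -- nothing about the coins themselves changes, only the description, which is just the observer-dependence being argued for above.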
As for liquids evaporating to form gases, if almost any reasonably well-informed observer is going to predict that, then maybe it's not too much to ask for Nature to oblige the predictions practically all of the time. After all, predicting *is* a major goal of probability theory, even if forecasts never come with an ironclad guarantee.
>Where the subjective component comes in, I think, is in terms of our
>interpretation of what the system is doing. We define what are liquids
>and what are gasses . . . But the behavior of the
>system is independent of our thoughts.
See my above comment about Nature. Natural events have their own reality, but there is no reason why this real behavior should be inconsistent with probability-based logical reasoning by rational observers.
>In the case of Kolmogorov complexity, the mathematics does not include
>a subjective element in quite the same way, but it does leak in . . . there
>are an infinite number of such computers,
>and each will come up with a different value for the complexity. You
>can show that, in some sense, they won't disagree too drastically over
>the broad range of all possible strings, but there is still room for
>disagreement . . .
Now we're getting somewhere; this is just the sort of impression that I got about your reference to a complexity definition to begin with.
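For what it's worth, that machine-dependence can be illustrated with off-the-shelf compressors standing in for the choice of universal computer. Compressed length is only a crude upper bound on Kolmogorov complexity, and zlib and bz2 are of course not universal machines, so treat this as a sketch of the "different computers, different values" point, with example strings of my own choosing:

```python
import bz2
import random
import zlib

# Two stand-in "reference machines": the compressed length under each
# compressor serves as a rough upper bound on a string's complexity.
random.seed(0)  # fixed seed so the "noisy" string is reproducible
regular = b"ab" * 500                                      # highly patterned
noisy = bytes(random.randrange(256) for _ in range(1000))  # pseudorandom

for name, machine in [("zlib", zlib), ("bz2", bz2)]:
    print(name, len(machine.compress(regular)), len(machine.compress(noisy)))
```

Both "machines" call `regular` far simpler than `noisy`, but the absolute numbers they assign to the same string disagree -- which is exactly where the subjective element leaks in, even though (per the invariance theorem) the true complexities under any two universal machines differ by at most a machine-dependent constant.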
> [David Blenkinsop wrote:]
>> Maybe the best approach here is just to admit that there isn't any truly
>> objective separation between conscious matter and unconscious, at least
>> nothing of an objectively *scientific* nature.
>In terms of the dilemma I raised, the question is whether there is an
>objective fact of the matter whether a given system is conscious or not.
>Can you imagine that it is truly possible that some other person is
>not conscious? Not just that they might not be conscious, but that it
>is as legitimate to say that they aren't conscious as that they are,
>and likewise that it is as legitimate to say that your kitchen sink is
>conscious as that it is not.
I'm not so concerned that someone might say that I'm no more conscious than a kitchen sink, since, for one thing, this is a really old philosophical chestnut, called "solipsism", famous for having no truly definitive, formal, or logical answer. Descartes said "I think, therefore I am", but you can never be sure that the other guy is *really* thinking; maybe everyone else but oneself is non-conscious -- that's the chestnut, or dilemma, of it, I believe. Again, you have to look to the actual behavior of what you find in Nature. For instance, if animals behave as though they have a kind of world-understanding, or sentience, why then, they probably do; and if other humans seem to understand things on a much higher, or more symbol-oriented, level than animals, why then, they probably do.
Any *scientific* separation between conscious and unconscious has to be practical and based on observable behaviors, unlike the implication of the statement about consciousness that you labelled "C": "It is an objective matter whether a given conscious entity exists". Behaviors indicating consciousness may exist without consciousness being locally or objectively definable in any *absolute* sense. Again, entropy is, in Drexler's words, "not a local property", so perhaps scientific definitions of consciousness can't be local either, even though we'd all report that "it feels as though I'm locally right here".
One thing that sparks my interest here is that it seems as though we've got an argument for God, or for something comparable to God, as the Great Observer on some mathematical level. If we prove our consciousness through behaviors, and someone else has to *see* those behaviors for any special "consciousness property" to become objectively real, then maybe all this making of "emergent qualities" just has to unify on *some* level? Myself, I would tend to see this as a bit like the quantum Many Worlds theory, in the sense that the ultimate reality apparently stretches out beyond what science can measure directly. So not only does it seem that there may be alternate worlds whose "consciousness supporting" ability we'll never verify; there may be a unity and an objectivity to it all that we'll never measure, as well.
David Blenkinsop <firstname.lastname@example.org>