Wei Dai, <email@example.com>, writes:
> Thanks for the explanation, Hal. I think I understand Eliezer's reasoning
> now, but I have a couple of objections to it. First I think it is possible
> to have a computational theory of consciousness (or qualia) that doesn't
> depend on a definition of "instantiation of a computation". See for
> example http://www.escribe.com/science/theory/index.html?mID=325.
I think this is a fascinating and promising approach, but it does come from a somewhat different perspective than standard computationalism. It brings in the idea of multiple universes, and it is not clear that a theory of consciousness should depend on that. It also violates one of the standard premises of computationalist models, which is that it doesn't matter what your computer is made of or how it works. This idea may be a little too far from the philosophical mainstream for most people to accept.
> Second, even if we really do need either a definition of instantiation or
> non-computable physics to explain qualia, I do not see sufficient reason
> to believe that the latter is more likely to exist than the former. The
> only justification I have seen is Eliezer's (and others') failed attempts
> to find such a definition. But people have presumably been trying to find
> examples of non-computable physics also, and they have also failed so far.
Perhaps it is easier to believe that new physics will be found than that we might be wrong about our philosophical ideas?
> I hope Eliezer will explain to us which approaches he tried when he
> attempted to find a definition of instantiation, why they failed, and why
> he doesn't believe other approaches will eventually work. What about the
> one given at http://pages.nyu.edu/~jqm1584/cwia.htm?
I don't think this idea works. Note that Jacques Mallah, the author, also views his ideas as preliminary and perhaps not fully satisfactory.
He alludes to a problem with Chalmers' attempt at the same thing, which is the difficulty of fixing the notion of states and substates of a physical system in a way which doesn't allow "cheating". Chalmers wanted to say that each element of the system had to have a fixed location, but that is clearly far too limited.
Mallah attempts to remedy this by invoking Kolmogorov (K) complexity to look for the "simplest" mapping of the states in the physical system. I had proposed a similar idea many years ago on comp.ai.philosophy (unfortunately beyond the Deja News event horizon). But my idea was shot down, and I think the objections apply to his proposal as well.
Two problems were presented to me. The first is that K complexity is uncomputable. That doesn't necessarily mean it is undefined or can't play a role in the nature of the universe. But if whether a physical system is conscious depends on the K complexity of certain aspects of it, then consciousness is a noncomputable predicate. This is not as bad as Eliezer's making consciousness rely on uncomputable physics, but it is still a disappointing result.
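For completeness, the standard argument for that uncomputability (a Berry-paradox style diagonalization, nothing specific to Mallah's paper) can be sketched in pseudocode:

```
# Pseudocode only -- this program cannot actually exist, which is the point.
# Suppose, for contradiction, that K were computable:
def first_complex_string(n):
    for s in all_strings_in_length_order():
        if K(s) > n:          # assumes a computable K
            return s
```

This program has some fixed length c, so "first_complex_string(n)" describes its output using only about c + log(n) symbols. But for large n that is far shorter than n, contradicting the fact that the output was chosen to have K complexity greater than n. So no computable K exists.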
The second problem is that K complexity is not uniquely defined. For any proposed string, you can find a universal Turing machine which outputs that string using a very small program. You can "cook the books" and find a UTM which will cause any chosen string or finite set of strings to have a low K complexity by the standards of that UTM. That means that you can cheat Mallah's attempt to find only simple mappings and make even trivial systems appear to implement complex calculations.
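To make the book-cooking concrete, here is a toy sketch in Python (my own illustration, not a construction from Mallah's paper): take any base machine U and build a rigged machine U' that hardcodes one chosen string, so that string gets a one-symbol "program" relative to U', while every other program pays only a one-symbol overhead:

```python
# Toy illustration of "cooking the books" with a rigged universal machine.
# TARGET stands in for any string we want to make artificially "simple".
TARGET = "some arbitrarily complex chosen string"

def base_machine(program: str) -> str:
    """A stand-in for a universal machine U: here, just Python eval
    of an expression, which is enough for the demonstration."""
    return str(eval(program))

def rigged_machine(program: str) -> str:
    """U': a program starting with '0' dumps the hardcoded TARGET;
    a program starting with '1' defers the rest to the base machine."""
    if program.startswith("0"):
        return TARGET
    return base_machine(program[1:])

# Relative to U', TARGET has a one-character description...
assert rigged_machine("0") == TARGET
# ...while any base program p still runs as "1" + p, costing one extra symbol.
assert rigged_machine("1" + "2+2") == "4"
```

The invariance theorem only says that complexities under different UTMs agree up to an additive constant; by picking the machine after the string, that constant can be made to swallow any finite string (or finite set of strings) you like, which is exactly the cheat described above.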
If K complexity is to determine whether something is conscious or not, which is an objective fact, then we can't have it be variable based on whatever UTM we choose. You would have to augment it by choosing a unique UTM as the pivot around which the universe of consciousness turns, and it is hard to see any basis for choosing one. (You could just choose one ad hoc and have that be your "theory" of consciousness, but it would not be an attractive solution.)