> >> Basically, I used to be a Strong AIer, until I tried to define
> >> computation in observer-independent terms - whether process A
> >> instantiates process B - and wound up with a lot of basic elements
> >> of cognition (many of which made it into _Coding_) but no objective
> >> definition. I concluded that the Turing formalism formalized human
> >> reasoning rather than physical law. Fini.
I agree that this is the biggest question for computationalist theories of intelligence. We have three seemingly true but mutually incompatible statements:

A. Whether a physical system instantiates a given computation is a matter of interpretation; with a sufficiently elaborate mapping, any system can be viewed as implementing any computation.

B. Consciousness is computational: any system which instantiates the right computation thereby has the corresponding conscious experience.

C. Ordinary objects like sofas are not conscious; only special systems such as brains instantiate minds.

You can resolve the contradiction by denying any one of the three. (No doubt true philosophers can find ways of reconciling them, as a fourth possibility.)
Eliezer suggests denying B, the premise of computationalist models of consciousness. We have often discussed the arguments of Searle and Lucas which lead to the same conclusion. David Chalmers goes further and attacks all physicalist theories of consciousness, proposing that consciousness must be an irreducible phenomenon separate from ordinary physical reality.
Other approaches are to deny A or C. Denying C leads to a model similar to Moravec's, where all possible conscious entities are considered to have equal reality. It would be a matter of interpretation whether a given system is conscious; from some points of view your sofa contains the consciousness of Albert Einstein, just as much as his brain ever did.
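To see just how cheap such an interpretation is, here is a toy Python sketch (all the state names are made up): given any sequence of distinct physical states and any computation trace of the same length, a bare lookup table "implements" the computation.

    # Toy sketch: ANY sequence of distinct physical states can be paired
    # with ANY computation trace by a lookup table.  Nothing constrains
    # the pairing, so the "implementation" comes for free.
    def contrived_mapping(physical_states, computation_trace):
        return dict(zip(physical_states, computation_trace))

    # Hypothetical state labels:
    sofa_states = ["sofa_microstate_0", "sofa_microstate_1", "sofa_microstate_2"]
    einstein_trace = ["brain_state_0", "brain_state_1", "brain_state_2"]

    # The sofa "instantiates" Einstein's brain states, by fiat.
    print(contrived_mapping(sofa_states, einstein_trace))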
I lean towards denying A. I hope that Kolmogorov complexity theory (Chaitin's algorithmic information theory) can provide a method for objectively measuring the "fit" of a model to a physical system. The only way I can see Einstein in my couch is by an elaborate mapping of physical events to the elements of his consciousness, but I can set up a much simpler mapping from his brain to his consciousness. This gives me an objective basis for saying that Einstein's brain instantiates his consciousness while my sofa does not.
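To give a rough feel for this (it is only an illustration, not a real measurement), one can use compressed length as a crude, computable stand-in for Kolmogorov complexity and compare the cost of describing the two mappings in Python:

    import os
    import zlib

    # Compressed length as a crude, computable stand-in for the
    # Kolmogorov complexity of a mapping's description.
    def description_cost(description: bytes) -> int:
        return len(zlib.compress(description))

    # The brain-to-mind mapping can be stated as a short rule...
    brain_rule = b"map each neuron's firing pattern to the corresponding element of the mind"

    # ...while the couch needs an arbitrary case-by-case table, simulated
    # here by incompressible random bytes.
    couch_table = os.urandom(30000)

    print(description_cost(brain_rule))   # small: the mapping is "forced"
    print(description_cost(couch_table))  # large: the mapping is contrived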
However, there are several problems with this. First, Kolmogorov complexity is noncomputable, and so is a questionable basis for determining objective reality. Second, Kolmogorov complexity is well defined only up to an additive constant. I can create a special "measuring engine" which looks at the world in a funny way, so that from its perspective my couch actually does map very simply onto Einstein's brain. The additive constant is of unspecified magnitude, and using it we can "cook the books" so that unexpected results like these can squeak by. Finally, even if these defects can be addressed, the approach still does not lead to an all-or-none decision about whether a system instantiates a given program, merely a numerical measure of how "forced" such an interpretation would be. So we would still be left with some fuzziness (albeit objective fuzziness) about whether various consciousnesses exist.
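The "measuring engine" trick can be illustrated with the same compression stand-in: hand the engine the couch table as built-in knowledge (zlib's preset dictionary plays the role of the funny reference machine), and the same table suddenly looks simple from its point of view.

    import os
    import zlib

    # Description cost relative to a "measuring engine" that carries
    # some built-in knowledge as a preset compression dictionary.
    def cost_relative_to(engine_knowledge: bytes, description: bytes) -> int:
        if engine_knowledge:
            comp = zlib.compressobj(zdict=engine_knowledge)
        else:
            comp = zlib.compressobj()
        return len(comp.compress(description) + comp.flush())

    couch_table = os.urandom(30000)

    # A neutral engine finds the table incompressible...
    print(cost_relative_to(b"", couch_table))

    # ...but an engine rigged with the table wired in finds it trivial.
    # The invariance theorem pins complexity down only to a machine-
    # dependent additive constant, and this is that constant at work.
    print(cost_relative_to(couch_table, couch_table))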
Wei Dai applied a somewhat similar idea to a multiverse model and came up with a way of describing in probabilistic terms how likely a given consciousness is to exist in a given world. This helps with the additive-constant problem because it would get the wrong answer only in a vanishingly small set of worlds (I think). It deals with the fuzziness because we are working with probabilities. It does leave the problem of the noncomputability of these measures, but maybe that is not too serious.
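I don't want to put words in Wei's mouth, but the flavor of the idea, in a toy sketch with invented numbers, is to weight each interpretation by 2^(-description length), universal-prior style, so that contrived mappings carry a vanishing share of the probability:

    # Toy sketch with invented numbers: weight each interpretation by
    # 2**(-description length in bits), so contrived mappings carry a
    # vanishing share of the total probability.
    def weight(description_length_bits):
        return 2.0 ** -description_length_bits

    brain_mapping_bits = 1000     # a short, lawful rule
    couch_mapping_bits = 1000000  # a giant arbitrary lookup table

    total = weight(brain_mapping_bits) + weight(couch_mapping_bits)
    print(weight(brain_mapping_bits) / total)  # ~1.0: the brain counts
    print(weight(couch_mapping_bits) / total)  # ~0.0: the couch does not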
In my opinion this is the best way out of the trilemma: we must reject the notion that all possible computational descriptions of a system are equally valid, and Kolmogorov complexity gives us the means to do so.