hal@finney.org wrote:
>
> Eliezer S. Yudkowsky, <sentience@pobox.com>, writes, quoting me:
> >
> > Plain intelligence? Sure. Probably the vast majority of races across
> > the Reality are non-conscious until they Transcend. We're exceptions,
> > but, quite obviously, the only consciously observed exceptions in the
> > absence of a Singularity.
>
> That's an interesting possibility. So they could evolve intelligence,
> emotions, and many of the other trappings of life similar to ours, but
> they would be lacking qualia. They would not be conscious in the sense
> that we are. They would process information, they would have models of
> the world as we do, it would probably be meaningful for them to use the
> words "I" and "me" in conversations. But something would be different.
They'd have complete personalities, and be likable or evil or whatever. They just wouldn't have Penrose, Chalmers, and Dennett duking it out in the philosophy journals. They might have Gödel's theorem and technophobes combining to yield Lucas-style anti-mechanists, even talk of sentience not being Turing-computable - but you wouldn't see any discussion of qualia. Instead, you'd just see the standard "Can a machine really have emotions?" debate, arguments over the self-symbol and the event-loop of the mind, and, for sufficiently sophisticated technophobes, the Argument from Gödel. You wouldn't see people arguing about the irreducible redness of red. I think.
> How would this difference manifest itself? Would it be possible to
> convince such a being that we humans have some "spark", some kind
> of primary, irreducible experience of reality, that they don't have?
> Suppose they aren't sure initially that there can be any experience
> of reality beyond what they have. What empirical test can we offer,
> what capability would we have that they do not? If they were blind, we
> could talk about how we can tell what is happening at a distance without
> having to go and touch it. What can we say if they are blind to qualia?
We can talk about the irreducible redness of red until they lock us in the Galactic Loony Bin.
> Maybe, after all, I don't have qualia in the sense that Eliezer does.
> Perhaps my basic sense of the universe is fundamentally different
> from his. We both react to the same universe and so there is a certain
> basic commonality of representation and reasoning, but perhaps the raw,
> nitty-gritty irreducible nature of reality is totally different for us.
> How could we detect this lack on my part?
We can't - our I/O is representable as static data, even though the internals aren't, so no behavioral test could reveal the difference. The only way I have to know that you're conscious is by analogizing your output to my output and assuming that they have the same internal cause. Philosophically, there's no way for me to know that you're conscious, short of opening up your brain (with tools left behind by the Transcendent who used to have my office) and having a look.
> Consider a situation like in The Matrix, where people have their
> brains directly interfaced to a computer simulation. Are objects in
> the simulation "real"? I would assume not. But what is the different
> behavior which would reveal their unreality? Are you saying that there
> is some way of distinguishing any simulation from reality? What is
> the trick?
No, the distinction between simulation and reality isn't what I'm talking about. There might be some real, definite, unarguable definition of "instantiation", but only for non-Turing processes. I'm not claiming there is no definition of instantiation, just that there's no definition within the Turing continuum.
--
sentience@pobox.com          Eliezer S. Yudkowsky
http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way