> So your transhuman AI would be intelligent but not conscious?
Yep, although probably not for very long.
> Do you believe then in zombies, beings which act conscious but which
> actually are not? Or would you say that any intelligent computer program
> would (in some sense) know that it is not conscious?
It'd notice differences between itself and humans. It probably wouldn't have an "I think therefore I am" certainty. It certainly wouldn't stand up and start debating qualia.
> Moravec has an interesting thought experiment in which a CA model like
> Conway's game of Life runs for long enough to evolve living organisms
> which then develop intelligence. Would you say that this is possible?
Plain intelligence? Sure. Probably the vast majority of races across the Reality are non-conscious until they Transcend. We're exceptions, but, quite obviously, the only consciously observed exceptions in the absence of a Singularity.
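(As a side note for anyone who hasn't seen it: the cellular automaton Moravec's thought experiment refers to is Conway's Game of Life, whose entire "physics" is one neighbor-counting rule. A minimal sketch of one update step, purely as a toy illustration and not anything from Moravec:)

```python
from collections import Counter

def life_step(live):
    """One update of Conway's Game of Life.
    `live` is a set of (x, y) coordinates of live cells."""
    # Count how many live neighbors each cell (live or dead) has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is live next step with exactly 3 neighbors,
    # or with 2 neighbors if it was already live.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates between a horizontal and a vertical bar.
blinker = {(0, 1), (1, 1), (2, 1)}
print(life_step(blinker))  # -> {(1, 0), (1, 1), (1, 2)}
```

(The point of the thought experiment being that this trivially computable rule set is, given enough time and space, enough substrate for evolution and intelligence.)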
> If so, could they then evolve communication, societies, emotions, fiction,
> philosophical speculations about the meaning of life? Or would some
> of these avenues be foreclosed to them because they are computationally
Meaning of life? Nil problemo, although, not knowing about the qualia of pleasure, they'd have considerably less reason to speculate that an External morality exists. On the other hand, just the existence of reality would be enough to inform a really advanced thinker that life ain't Turing-computable.
> I don't see where to draw the line here. I don't see anything stopping
> these beings from evolving intelligence and emotion and all the other
> characteristics we would associate with consciousness.
Speak for yourself. A dog has emotion. Does it have qualia? Who knows? I don't see why emotions should be noncomputable. In fact, I think I understand emotions, and I don't see any noncomputable parts.
> What gives us
> the privileged position to step in and say that although they act like
> they are conscious, and claim to be conscious, they actually are not?
Nothing. If an AI says it understands what qualia are and it has them, and it supplies either a satisfying explanation for consciousness or a good explanation of why we have a mental "blind spot" that prevents us from understanding the explanation, I'd probably take its word for it. I just don't expect that to happen; I expect the AI to agree with me.
> The one legitimate argument I could see along these lines would be the
> idea that computability is simply too weak a mechanism to produce minds
> like ours.
Minds? No. Qualia? Yes.
> Penrose claims that our minds rely on some form of ultra
> computability, which goes beyond Turing computability.
Penrose is, no offense, being silly. His whole argument from Gödel is silly. It's been stomped into oblivion by any number of people. I respect his physics and his neurobiology, but his math and cognitive science are absurd. In other words, his reasons for believing in noncomputability are stark wrong; he just happens to be right anyhow and to have reasoned forwards in a useful way.
> Presumably in
> this model no matter how long you let a computer evolve or how hard you
> work at programming it, it will never show the full level of functional
> mental competence that human beings have (including, per Penrose, the
> ability to do mathematics as well as humans). A more powerful primitive
> is needed to allow that level of functionality.
I don't think that our qualia provide a whole lot of new functionality. Probably our brains take a physical shortcut for efficiency and the qualia are a side effect. So while it might take more computing power - at worst, orders of magnitude more power - you wouldn't see any qualitative difference in intelligence - no different *kind* of thinking.
> Although I don't think there is much empirical evidence for this position,
> it is more philosophically attractive than accepting the existence
> of zombies. It seems that rejecting computationalism requires you to
> accept one of these two possibilities.
Not really; I regard qualia as a sort of oddball side effect. Not epiphenomenal, but not tremendously important either. No, let me rephrase that - *extremely* intriguing, but our *current* minds aren't doing much with it.
> > I'm extremely conservative when it comes to reality. I'm willing to
> > believe that quarks are objectively real. I'm willing to believe that
> > qualia are objectively real. I don't believe in the laws of physics,
> > mathematical theorems, apples, or any other abstracted properties.
> I am having some trouble following you here. What do you mean, you don't
> believe in the laws of physics? Didn't you say earlier that you thought
> they were "real" (and hence malleable)?
Let me actually amend that a bit. Qualia are real, but they're fairly exotic (read "really weird"), which in turn leads me to believe that it may be possible to construct other exotic real things - such as, perhaps, the things we call "the laws of physics", or trans-qualia for transhumans, or the hypothesized External morality that underlies my ethical system.
What I don't believe in is that physicists will someday write the One Equation on a blackboard without knowing "what breathes fire into the equations and makes them live". The first cause, the breather of fire, is experimentally detectable; it has to show up in the equations. In fact, the equations should be incomplete without it, or you've got the wrong equations.
The one thing that got seared into my memory by my attempt to formalize instantiation is: never, ever, ever believe in epiphenomena. *Anything* real has to be experimentally detectable, including the property of reality itself. Anything "real" has to exhibit different behavior than things that are "not real". Otherwise you've got epiphenomena, a zombie theory of reality. A Turing computation, being a Platonic object, proceeds just the same whether it's "instantiated" or "not instantiated" in our reality. This is probably the fundamental reason why nobody will ever define instantiation; zombie theories of reality are as silly as zombie theories of consciousness.
--
firstname.lastname@example.org   Eliezer S. Yudkowsky
http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak         Programming with Patterns
Voting for Libertarians   Heading for Singularity  There Is A Better Way