Re: A Spring-Powered Theory of Consciousness (6 of 6)

From: Eliezer S. Yudkowsky
Date: Mon Jun 19 2000 - 23:54:31 MDT

Haven't finished reading this yet, but from the skim:

> On the
> other hand, the title of Eliezer's article has an awfully
> top-down ring due to the word "coding",

Certainly guilty on the "top-down" part... that's the whole point of
CaTAI, to present a functional decomposition of intelligence that
enables us to actually *know what we're doing* as we write each line of
code. Of course, as a method, this is quite agnostic with respect to
which layers of cognition are specified as code, which are trained
concepts, and which are learned thoughts.

> and his notion of a
> "codic cortex" or "codic sensory modality" seems to swamp the
> usual notion of a sensory modality.

I am not extending the term "sensory modality" in any way whatsoever! I
am using it in exactly the standard sense! Humans need to write code
using abstract, conscious thought precisely because we do not have a
codic cortex. It's nothing intrinsic to code. People who have their
vision restored after a sufficiently long period of blindness need to
reason consciously and abstractly about visual perceptions - just as we
now reason abstractly about code. It doesn't mean that vision, or code,
is an intrinsically abstract task.

I again say that a codic cortex would perceive code. Not write it.
Perceive it. The visual cortex doesn't design skyscrapers. The codic
cortex doesn't write spreadsheets.

I should probably expand on the distinction in future versions of CaTAI.

> I wonder how, using
> Edelman's schema, one would select "values" for a codic sensory
> modality without instantly facing all the difficulties of
> traditional AI, and in an even more concentrated form than usual
> (we don't want a robot that can just sweep the floor, we want a
> robot that can write a computer program for a newer robot that
> can program better than it does!).

No, that's the ultimate goal of seed AI. You don't need to bite off
that chunk all at once. First you code the modality, then you train the
concepts, then you teach the thoughts... The way to get an AI that can
redesign its own source code is to start with an AI that can perceive a
single function.
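Just to give the flavor of that first step - and this is only a toy
sketch of my own, nothing like an actual codic modality - here's what
"perceiving a single function" might look like at the crudest possible
level: parsing one function into structural features instead of raw
text, using Python's ast module. All the names here are made up for
illustration.

```python
import ast

def max_depth(node: ast.AST, level: int = 0) -> int:
    """Deepest nesting level below this node."""
    children = list(ast.iter_child_nodes(node))
    if not children:
        return level
    return max(max_depth(c, level + 1) for c in children)

def perceive_function(source: str) -> dict:
    """Parse a single function and report structural features -
    a toy stand-in for a modality's low-level feature detectors."""
    tree = ast.parse(source)
    func = tree.body[0]  # assume the source holds exactly one function def
    return {
        "name": func.name,
        "arity": len(func.args.args),
        "nesting": max_depth(func),
        "branches": sum(isinstance(n, ast.If) for n in ast.walk(func)),
        "loops": sum(isinstance(n, (ast.For, ast.While)) for n in ast.walk(func)),
    }

src = (
    "def clamp(x, lo, hi):\n"
    "    if x < lo:\n"
    "        return lo\n"
    "    if x > hi:\n"
    "        return hi\n"
    "    return x\n"
)
print(perceive_function(src))
```

The point is only that the percept is structured before any conscious
reasoning happens - the "retina" of the modality, not the thinker.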

> It seems like a bit of a leap
> from "Blue is bad, red is good" to "COBOL is bad, Java is good"
> ;->.

That's the trick - creating a codic modality that can perceive
COBOL-style and Java-style as gestalt features of the code, as visible
as the difference between a "smooth" texture and a "wavy" texture.
(Note that a blind human who regains vision late in life, or an AI
without a visual modality, would need to reason consciously to
distinguish between smooth and wavy...)

This archive was generated by hypermail 2b29 : Thu Jul 27 2000 - 14:13:52 MDT