Hal Finney challenged:
> Eliezer's coding cortex doesn't seem much more likely. I don't see
> how it could do more than strictly localized parsing and improvement,
> technologies we already have. Beyond that requires a deep understanding
> of the purpose of code. An unintelligent brain fragment won't be able
> to achieve this.
An unintelligent brain fragment could do part of the work, but it
couldn't give itself something to do. It needs the rest of the brain
to do that.
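
(To make "strictly localized parsing and improvement" concrete, here is
the sort of thing an optimizing compiler already does: a little
constant-folding pass. This is purely my own toy illustration in Python,
nothing out of CaTAI; the point is that it improves code without any
representation of what the code is *for*.)

  import ast
  import operator

  # The handful of operations this toy pass knows how to fold.
  OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
         ast.Mult: operator.mul, ast.Div: operator.truediv}

  class ConstantFolder(ast.NodeTransformer):
      """Rewrite constant subexpressions like 2 * 3.14159 into their value."""
      def visit_BinOp(self, node):
          self.generic_visit(node)               # fold the children first
          op = OPS.get(type(node.op))
          if (op and isinstance(node.left, ast.Constant)
                  and isinstance(node.right, ast.Constant)):
              try:
                  value = op(node.left.value, node.right.value)
              except Exception:
                  return node                    # e.g. division by zero: leave it alone
              return ast.copy_location(ast.Constant(value=value), node)
          return node

  tree = ast.parse("area = 2 * 3.14159 * r")
  tree = ast.fix_missing_locations(ConstantFolder().visit(tree))
  print(ast.unparse(tree))                       # Python 3.9+: area = 6.28318 * r
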
> Is the coding cortex supposed to be more than an optimizing compiler?
> If so, I'd like to hear what it is going to be able to do, and how it
> will do it.
The analogy between this and the intelligent coffee maker doesn't
really hold. This AI does have it as a goal to enhance its
intelligence, but it won't have "enhance intelligence" as a bare goal
like "make good coffee": it'll be informed in a very rich manner about
what it's trying to maximize. Much of this will probably be done by hand.
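
(Purely as an illustration of what I mean by "informed in a very rich
manner" - my own sketch, not anything from CaTAI - contrast a bare goal
string with a goal hand-decomposed into subgoals that each carry a
concrete yardstick the system can actually check:)

  from dataclasses import dataclass, field

  @dataclass
  class Subgoal:
      what: str        # a concrete piece of work
      yardstick: str   # the hand-supplied test that says whether it improved

  @dataclass
  class Goal:
      what: str
      subgoals: list = field(default_factory=list)

  bare_goal = "enhance intelligence"          # the "make good coffee" version

  rich_goal = Goal("enhance intelligence", [
      Subgoal("speed up the parser module",
              "same outputs, lower time on the benchmark suite"),
      Subgoal("fold constants in generated code",
              "identical behavior, fewer instructions"),
      Subgoal("improve the chess submodule",
              "higher rating against last week's version"),
  ])
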
Eliezer treats this objection in a section in which he challenges the
idea that intelligence enhancement is a self-referential goal.
Quoting Eliezer from CaTAI:
> A surprisingly frequent objection to self-enhancement is that
> intelligence, when defined as "the ability to increase
> intelligence", is a circular definition - one which would, they say,
> result in a sterile and uninteresting AI. Even if this were the
> definition (it isn't), and the definition were circular (it wouldn't
> be), the cycle could be broken simply by grounding the definition in
> chess-playing ability or some similar test of ability. However,
> intelligence is not defined as the ability to increase intelligence;
> that is simply the form of intelligent behavior we are most
> interested in. Intelligence is not defined at all. What
> intelligence is, if you look at a human, is more than a hundred
> cytoarchitecturally (2) distinct areas of the brain, all of which
> work together to create intelligence. Intelligence is, in short,
> modular, and the tasks performed by individual modules are different
> in kind from the nature of the overall intelligence. If the overall
> intelligence can turn around and look at a module as an isolated
> process, it can make clearly defined performance improvements -
> improvements that eventually sum up to improved overall intelligence
> - without ever confronting the circular problem of "making myself
> more intelligent". Intelligence, from a design perspective, is a
> goal with many, many subgoals. An intelligence seeking the goal of
> improved intelligence does not confront "improved intelligence" as a
> naked fact, but a very rich and complicated fact adorned with less
> complicated subgoals.
The idea here is that a "coding" cortex COULD get a handle on what it
was trying to work on by virtue of the fact that we've hand-coded in a
lot of intelligence, unlike a coffee maker with no clear idea of what
constitutes "good coffee."
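
Here's a toy sketch of the decircularization move, as I read it - again,
my own illustration and nothing like Eliezer's actual architecture. The
system never confronts "become more intelligent" as a naked goal; it
confronts "make this lookup module give the same answers, faster, on this
hand-supplied benchmark":

  import bisect
  import timeit

  def lookup_linear(haystack, needle):
      # current module: a plain linear membership test
      return needle in haystack

  def lookup_bisect(haystack, needle):
      # candidate replacement: binary search over the (sorted) haystack
      i = bisect.bisect_left(haystack, needle)
      return i < len(haystack) and haystack[i] == needle

  def better_module(current, candidate, cases):
      """Adopt the candidate only if it agrees with the current module on
      every test case and runs faster on the benchmark."""
      if any(current(*c) != candidate(*c) for c in cases):
          return current                      # behavior changed: reject it
      def bench(f):
          return timeit.timeit(lambda: [f(*c) for c in cases], number=10)
      return candidate if bench(candidate) < bench(current) else current

  data = list(range(0, 30_000, 3))            # already sorted
  cases = [(data, n) for n in range(300)]
  chosen = better_module(lookup_linear, lookup_bisect, cases)
  print(chosen.__name__)                      # lookup_bisect, on any sane machine
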
Part of what's easy to react strongly against in Eliezer's account is
the temptation to assume that, like every other self-proclaimed
visionary in AI, he's found the one simple thing you have to do in
order to get AI ("all you need is a coding cortex..."). He hasn't. If
anything, he's found the many hard things you have to do in order to
solve the hardest questions we've ever asked. Sometimes I wonder if
his idea isn't just the claim that, once we hand-code an AI,
transhuman AI will be the easy part. (Of course, even this claim
can be strongly doubted.)
-Dan
-unless you love someone-
-nothing else makes any sense-
e.e. cummings