"Chunking" is a very strong term, and I'm not sure of its meaning in this
context.

But it would be completely absurd to argue that the human brain does not
have multiple specialized intelligences. Neuroscience shows rather clearly
that different brain regions are specialized for different types of
functions. And basic human experience shows that some people who are
super-smart in math are poor in humanities, and vice versa. And some people
who are very clever at all sorts of intellectual things are much LESS clever
at manipulating social situations. Etc.

One could make a more plausible argument that the need for multiple
specialized components is not a GENERAL property of any intelligent systems,
but only a PARTICULAR property of the human brain and other systems similar
to it. I think this is also badly wrong, but at least it's not contradicted
by vast amounts of direct empirical evidence. It's merely contradicted by
vast amounts of INDIRECT empirical evidence, as well as by common sense.

It's a fact known to anyone who's done practical AI work that a more
specialized approach to a given problem is going to be more efficient than a
more general approach, in almost all cases. Building a real AI thus
requires a very delicate balance between generality and specialization. One
needs, in fact, a general intelligence mechanism that can also serve as a
sort of "mind OS" on which numerous specialized intelligence mechanisms can
run. But you've heard this spiel from me before...
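
In code, the "mind OS" idea might look something like the minimal sketch
below. To be clear, this is just an illustration of the architecture, not any
real system; all the class and function names are hypothetical. A general
controller hosts pluggable specialized mechanisms and routes each problem to
the first specialist that claims it, falling back on a slower but fully
general solver when none does.

```python
# Hypothetical sketch of a "mind OS": a general substrate on which
# specialized intelligence mechanisms are registered and run.
from typing import Callable, List, Tuple

class MindOS:
    def __init__(self, general_solver: Callable[[str], str]):
        # The general mechanism can handle anything, but inefficiently.
        self.general_solver = general_solver
        # (predicate, solver) pairs for specialized mechanisms.
        self.specialists: List[
            Tuple[Callable[[str], bool], Callable[[str], str]]
        ] = []

    def register(self, can_handle: Callable[[str], bool],
                 solver: Callable[[str], str]) -> None:
        """Plug a specialized mechanism into the shared substrate."""
        self.specialists.append((can_handle, solver))

    def solve(self, problem: str) -> str:
        # Try the specialists first: specialization buys efficiency.
        for can_handle, solver in self.specialists:
            if can_handle(problem):
                return solver(problem)
        # No specialist applies, so fall back on generality.
        return self.general_solver(problem)

os_ = MindOS(general_solver=lambda p: f"general reasoning about {p!r}")
os_.register(lambda p: p.startswith("math:"),
             lambda p: f"symbolic engine on {p!r}")
os_.register(lambda p: p.startswith("social:"),
             lambda p: f"social model on {p!r}")

print(os_.solve("math: 2+2"))     # routed to the math specialist
print(os_.solve("poetry draft"))  # falls through to the general solver
```

The point the sketch makes is the same one argued above: the specialized
mechanisms do the efficient work, while the general mechanism guarantees
that nothing falls outside the system's competence.
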
> -----Original Message-----
> From: firstname.lastname@example.org
> [mailto:email@example.com]On Behalf Of Eliezer S. Yudkowsky
> Sent: Wednesday, May 09, 2001 3:41 PM
> To: firstname.lastname@example.org
> Subject: Re: Chunking intelligence functions
> PS: I also agree that spatial and linguistic intelligence are
> unproblematic, although linguistic intelligence might well be separable
> into "lexical" (word-memory), "comprehension", and "expression"
> components. All of these are known to be abilities that are supported by
> specific and distinct hardware, such that damage to a hardware module
> results in selective damage of the ability and nothing else.
> -- -- -- -- --
> Eliezer S. Yudkowsky http://singinst.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2b30 : Mon May 28 2001 - 10:00:03 MDT