Intelligence scaling (was Re: Posthuman Language)

From: Robert J. Bradbury
Date: Tue Nov 27 2001 - 23:10:56 MST

On Wed, 28 Nov 2001, Damien Broderick wrote:

> And even if a posthuman utterance of stupefying complexity and depth were
> translatable (by the posthuman, of course) portion-by-portion into English,
> it seems to me perfectly possible that no human would be able to construe
> the utterance. We might not be able to hold its several assertions or moves
> simultaneously in our heads long enough, or at a sufficient level of
> comprehension, to unpack meaning intended by the posthuman (supposing that
> implies *super*human).

This is a key point. At the times when I'm struggling to come to grips
with something that Eric or Robert Freitas has written, I really feel
this is true. The fact that it has taken ~20 years for more than
a handful of serious scientists to catch up with Eric's insights
(or 40 years to catch up with Feynman's) *really* points out the
degree to which standing just a little bit above the crowd allows
you to see forever.

More importantly, we now seem to have some empirical evidence that
intelligence is not a simple linear scale. A couple of British
mathematicians seem to have shown that a relatively small increase
in "capacity" buys a *lot* in terms of "effective" intelligence.
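One way to see why a small capacity increase could buy a lot is purely combinatorial. The sketch below is my own illustration, not the mathematicians' actual model: if "capacity" means the number of items you can hold in mind at once, the pairwise relations among those items grow quadratically and the possible groupings grow exponentially, so a modest bump in capacity greatly expands what can be considered simultaneously.

```python
# Toy sketch (an assumption about what "capacity" means, not the cited result):
# relations among n simultaneously-held items grow as C(n, 2),
# and non-empty groupings of those items grow as 2^n - 1.
from math import comb

def pairwise_relations(capacity):
    """Distinct pairs among `capacity` items held in mind at once."""
    return comb(capacity, 2)

def groupings(capacity):
    """Distinct non-empty combinations of those items."""
    return 2 ** capacity - 1

for capacity in (4, 5, 7, 10):
    print(capacity, pairwise_relations(capacity), groupings(capacity))
```

Going from a capacity of 4 to 7, for instance, more than triples the pairwise relations and grows the groupings nearly ninefold, which at least makes the nonlinearity plausible.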

Here is a *really* poor news summary from Wired News:,1282,48576,00.html

One can hope that someone more qualified to comment on the
results will publish something a bit clearer about the methods.
(Perhaps someone can find something in the preprint archives...)

But it tends to support Damien's point. I would propose a corollary --
If you cannot hold the enormity of the problem in your consciousness,
then developing effective solutions is *really* difficult.

Some examples that come to mind are the Four-Color Map Theorem,
Fermat's Last Theorem, and of course what would be the most
extropic U.S. Foreign Policy.


This archive was generated by hypermail 2b30 : Sat May 11 2002 - 17:44:22 MDT