But where *do* you draw your conclusions from, if not by analogy with
other intelligence growth processes? Saying that "superintelligence is
nothing like anything we've ever known, so my superfast growth estimates
are as well founded as any other" would be a very weak argument. Do you
have any stronger argument?

Eliezer S. Yudkowsky writes:
>Quoting Max More:
> "... I have no doubt that
> human level AI (or computer networked intelligence) will be
> achieved at some point. But to move from this immediately to
> drastically superintelligent thinkers seems to me doubtful."
>...
>the seed AI's power either remains constant or has a definite maximum ...
>efficiency ... defined as ... the levels of intelligence achievable at
>each level of power ... more intelligence makes it possible for the AI
>to better optimize its own code. ...
>the basic hypothesis of seed AI can be described as postulating
>a Transcend Point; a point at which each increment of intelligence yields
>an increase in efficiency that yields an equal or greater increment of
>intelligence, or at any rate an increment that sustains the reaction.
>This behavior of de/di is assumed to carry the seed AI to the Singularity
>Point, where each increment of intelligence yields an increase of efficiency
>and power that yields a reaction-sustaining increment of intelligence.
>It so happens that all humans operate, by and large, at pretty much the
>same level of intelligence. ... the brain doesn't self-enhance, only
>self-optimize a prehuman subsystem. ...
>You can't draw conclusions from one system to the other. The
>genes give rise to an algorithm that optimizes itself and then programs
>the brain according to genetically determined architectures ...
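
To pin down what I am questioning, a formalization sketch (notation of
my own; the post above defines only power, efficiency, and de/di
informally). Write p for fixed power, e(i) for efficiency at
intelligence i, and I(e, p) for the intelligence achievable at a given
efficiency and power. One self-rewrite is then

  \[
    i_{k+1} = I\bigl(e(i_k),\, p\bigr), \qquad
    \Delta i_{k+1} \approx \frac{\partial I}{\partial e}\,
                           \frac{de}{di}\,\Delta i_k ,
  \]

and the hypothesized Transcend Point is the intelligence past which

  \[
    \frac{\partial I}{\partial e}\,\frac{de}{di} \;\ge\; 1 ,
  \]

so that each increment of intelligence yields an equal or greater one.
My claim below is that in the growth processes we have observed, this
product falls below one once the easy big wins are used up.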
This experience with intelligence growth seems highly relevant to me.
First, we see that the advantage smarter creatures have in implementing
any one improvement is counteracted by the fact that one tries the
easy, big-win improvements first. Second, we see that growth is social:
it is the whole world economy that improves together, not any one
creature improving itself. Third, we see that easy big wins are very
rare; growth comes mainly from the accumulation of many small
improvements. (Similar lessons come from our experience trying to
write AI programs.)
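
To make the first and third points concrete, here is a toy simulation
(my own construction, with made-up numbers, purely for illustration):

  # Toy model: an optimizer always takes its largest remaining
  # improvement first. The pool is hypothetical: a few rare big wins
  # plus many small ones.
  improvements = [2.0] * 3 + [1.05] * 500   # multiplicative gains

  intelligence = 1.0
  history = [intelligence]
  for gain in sorted(improvements, reverse=True):  # easy big wins first
      intelligence *= gain
      history.append(intelligence)

  # Per-step growth is front-loaded: 2.0x for the first three rewrites,
  # then a steady 1.05x -- the reaction damps instead of sustaining.
  for k in (1, 2, 3, 4, 50, 500):
      print(k, round(history[k] / history[k - 1], 3))

Steady 5% steps still compound, but that is ordinary growth, not a
runaway; the superfast scenario needs the per-step multiplier to stay
high, and nothing in our experience suggests that it does.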
Robin Hanson
hanson@econ.berkeley.edu http://hanson.berkeley.edu/
RWJF Health Policy Scholar, Sch. of Public Health 510-643-1884
140 Warren Hall, UC Berkeley, CA 94720-7360 FAX: 510-643-8614