AI big wins (was: Punctuated Equilibrium Theory)

Robin Hanson
Thu, 24 Sep 1998 09:51:22 -0700

Eliezer S. Yudkowsky writes:
>Well, my other reason for expecting a breakthrough/bottleneck architecture,
>even if there are no big wins, is that there's positive feedback involved,
>which generally turns even a smooth curve steep/flat. And I think my
>expectation about a sharp jump upwards after architectural ability is
>independent of whether my particular designs actually get there or not. In
>common-sense terms, the positive feedback arrives after the AI has the ability
>humans use to design programs.

Let me repeat my call for you to clarify what appears to be a muddled argument. We've had "positive feedback", in the usual sense of the term, for a long time. We've also been able to modify and design AI architectures for a long time. Neither of these considerations obviously suggests a break with history.

>My understanding of the AI Stereotype is that the youngster only has a single
>great paradigm, and is loath to abandon it. I've got whole toolboxes full ...

I think you're mistaken - lots of those cocky youngsters have full toolboxes. ("Yup, mosta gunslingers get kilt before winter - but they mosta got only one gun, and looky how many guns I got!")

Robin Hanson
RWJF Health Policy Scholar, Sch. of Public Health
140 Warren Hall, UC Berkeley, CA 94720-7360
510-643-1884   FAX: 510-643-8614