Re: Mike Perry's work on self-improving AI

Joseph Sterlynne
Tue, 7 Sep 1999 22:06:34 -0600

> "Matt Gingell" <>
>> Joseph Sterlynne <>

>>Some of this
>>concern has, I think, been incorporated as a major facet of studies of
>>complex systems: we don't know why some systems can defeat local optima,
>>how minimally complex various features must be, or even what the shape of
>>certain multidimensional solution spaces is.

>I don't think there can be any general solution to the problem of local
>optima. There are lots of different useful techniques, but any hill-climbing
>algorithm can potentially get stuck. It's a question of the topology of the
>search space, which can be arbitrarily complex.
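The hill-climbing point above can be illustrated with a minimal toy sketch (my own illustration, not anything from the original post): a greedy climber on a two-peaked function halts at whichever optimum sits in its starting basin, regardless of how good the other peak is.

```python
import random

def hill_climb(f, x, step=0.1, iters=1000):
    """Greedy hill climbing: accept a neighbor only if it improves f.

    On a multimodal function this stalls at the local optimum nearest
    the starting point -- exactly the failure mode discussed above.
    """
    for _ in range(iters):
        candidate = x + random.choice([-step, step])
        if f(candidate) > f(x):
            x = candidate
    return x

# A toy landscape with two peaks: a local optimum at x = -1 (height 1)
# and the global optimum at x = 2 (height 3).
def f(x):
    return max(1 - (x + 1) ** 2, 3 - (x - 2) ** 2)

random.seed(0)
left = hill_climb(f, -1.5)   # starts in the basin of the local peak
right = hill_climb(f, 1.5)   # starts in the basin of the global peak
```

Starting at -1.5 the climber settles near the inferior peak at -1 and stays there, since every escape route first goes downhill; starting at 1.5 it finds the global peak near 2. No amount of extra iterations changes the first outcome, which is the sense in which the topology of the search space, not the algorithm's budget, decides the result.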

Perhaps, though Rik and Eliezer have offered rebuttals. Whatever the case, I should clarify that my main intention here was not to question movement in search space but to raise the problem of understanding the architecture of complex systems. I mentioned the human system, which is still a little complicated for us; in terms of Perry's design, though, there may be lessons as to what constitutes an architecture flexible enough to see around optima. I am thinking of Aaron Sloman's attempts to sketch the salient dimensions of minds by imagining them arrayed in a space. If we knew the details of the human mind we could locate ourselves within this space (and thus, hopefully, identify directions in which to move).

>I think itís interesting to ask why evolution didnít get stuck [. . .].

Oh, I took care of this one long ago. See my 1971 _Wear My Britches, Matt!: The Homo Sapiens Story_ (Oxford University Press).