Re: Intelligence, IE, and SI - Part 3

Eliezer S. Yudkowsky (sentience@pobox.com)
Wed, 03 Feb 1999 15:48:33 -0600

> Unfortunately, this sort of improvement also tends to be self-limiting. In
> any given problem domain there comes a time when all of the known methods
> for improving performance have been applied, and there are no more obvious
> improvements that can be made. Then you are reduced to inventing new
> solutions, which is a process of scientific discovery that requires large
> amounts of effort and produces only erratic results.

I'd just like to remind everyone that this is an open issue. My claim has always been that, to our knowledge, this applies _only_ to a constant intelligence - a constant intelligence, repeatedly examining the same problem, _may_ eventually run out of steam. Over a wider range of history, a pattern of breakthroughs and bottlenecks is apparent - running out of steam is only temporary, until a new stimulus breaks the deadlock. Again, this is for constant intelligence.

If the problem domain is intelligence itself, the problem may never arise. Any increment in intelligence, achieved by applying the current problem-solving methods, may give rise to new problem-solving methods. I can't know that this holds for an entire AI trajectory, of course. My knowledge is limited to a short range around current levels of intelligence, and the behavior exhibited in that range is that most major bottlenecks yield to even a very slight intelligence enhancement. I assume, for lack of a better-supported alternative, that this behavior holds all along the trajectory.
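To make the contrast concrete, here's a toy recurrence in Python - purely illustrative, with the decay and growth rates (decay=0.5, rate=0.1) pulled out of thin air rather than measured from anything:

    # Toy model: progress under constant vs. self-improving intelligence.
    # Assumption, not claim: fixed methods reapplied to the same domain
    # yield geometrically decaying gains; a self-improving intelligence
    # folds each gain back into its own problem-solving power.

    def constant_intelligence(steps, gain=1.0, decay=0.5):
        """Fixed methods, reapplied: gains shrink, progress converges."""
        total = 0.0
        for _ in range(steps):
            total += gain
            gain *= decay           # same methods, diminishing returns
        return total                # bounded - runs out of steam

    def self_improving(steps, intelligence=1.0, rate=0.1):
        """Each increment feeds back: I(n+1) = I(n) + f(I(n))."""
        for _ in range(steps):
            intelligence += rate * intelligence
        return intelligence         # unbounded while f keeps scaling with I

With these arbitrary parameters the first function converges toward 2.0 no matter how many steps it runs, while the second grows without bound. Whether f really keeps scaling with intelligence along a real trajectory is exactly the open issue above; the toy only shows that bounded progress follows from fixed methods, and need not follow from methods that improve with each gain.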

Few here would care to posit ultimate limits that bind not merely our first crude AIs but the utmost efforts of galactic civilizations a billion years hence. What is any temporary bottleneck on such a scale? With time and advancement, such things will crumble - or so we believe. But an intelligence enhancement is just as poisonous to a bottleneck as a million years of plugging away.

Our history has trained us to find eternal bottlenecks absurd. Bottlenecks that somehow stand up to enhanced intelligence are just as silly.

-- 
        sentience@pobox.com         Eliezer S. Yudkowsky
         http://pobox.com/~sentience/AI_design.temp.html
          http://pobox.com/~sentience/sing_analysis.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.