RE: Intelligence, IE, and SI - Part 3

Billy Brown (bbrown@conemsco.com)
Thu, 4 Feb 1999 08:40:49 -0600

Eliezer S. Yudkowsky wrote:
> I'd just like to remind everyone that this is an open issue. My claim
> has always been that, to our knowledge, this applies _only_ to a
> constant intelligence - a constant intelligence, repeatedly examining
> the same problem, _may_ eventually run out of steam. Over a wider range
> of history, a pattern of breakthroughs and bottlenecks is apparent -
> running out of steam is only temporary, until a new stimulus
> breaks the deadlock. Again, this is for constant intelligence.

Well, yes. My comments in the previous post really only apply to human-level intelligence - I was trying to stick with claims we can actually back up with experimental data.

> If the domain is the improving intelligence itself, the problem may
> never arise. Any increment in intelligence, achieved by applying the
> current problem-solving methods, may give rise to new problem-solving
> methods. I can't know that for an AI trajectory, of course. My
> knowledge is limited to a short range around current levels of
> intelligence, and the behavior exhibited here is that most major
> bottlenecks yield to a very slight intelligence enhancement. I assume,
> for lack of a better-supported alternative, that this behavior is
> universal along the trajectory.

Well, if we're going to speculate, I'd suggest a slightly more complex picture. For an entity with a fixed intelligence, working to improve its problem-solving techniques within a fixed domain, the effort required for each successive improvement seems to grow exponentially. You get lots of big wins very easily at the beginning, then the going starts to get tough, and eventually you reach a point where any further improvement seems impossible.

However, that only means a particular approach has been exhausted. If you have a whole population of entities at that same fixed intelligence level, one of them will eventually find a way to reformulate the problem in completely different terms, and a new exponential improvement curve begins. This is where Robin Hanson's 'low-hanging fruit' argument about IE breaks down - it ignores the fact that breakthroughs are not simply a matter of throwing more effort at the problem. Finding a better way to frame the problem is a matter of insight, not of number-crunching.
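To make that picture concrete, here's a throwaway Python sketch. Every parameter in it is invented - the breakthrough probability, the cost growth, all of it - so treat it as an illustration of the curve I'm describing, not a model of anything real. Within one approach each win costs twice as much as the last; once in a while a reformulation resets the cost back to the starting point:

# Throwaway sketch (every parameter is invented): within one approach,
# each successive win costs exponentially more effort; a rare
# "reformulation" breakthrough resets the cost curve.
import random

random.seed(1)

def wins(total_effort, breakthrough_chance=0.02, base_cost=1.0, growth=2.0):
    """Count the improvements a fixed effort budget buys."""
    improvements = 0
    next_cost = base_cost              # effort needed for the next win
    effort_left = total_effort
    while effort_left >= next_cost:
        effort_left -= next_cost
        improvements += 1
        if random.random() < breakthrough_chance:
            next_cost = base_cost      # reformulation: fresh exponential curve
        else:
            next_cost *= growth        # same approach: the going gets tougher
    return improvements

for budget in (10, 100, 1000, 10000):
    print(budget, wins(budget))

Drop the reformulation branch and the number of wins grows only with the logarithm of the effort spent; the occasional reframing is what keeps total progress from flattening out - and the reframing is exactly the part you can't buy with raw effort.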

For an entity capable of improving its own cognitive abilities, this effect might or might not yield fast, open-ended progress. IMO, the key variable is the entity's initial set of abilities. For a human it would still be a hopeless task - I could spend decades optimizing a single ability and have little to show for the effort. For an algorithmic AI, the problem should be much more tractable, for the reasons you've already outlined.
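Here's the same kind of toy sketch for the self-improvement case - again, all the numbers are made up, and it's only meant to show why the starting point matters. Assume each gain in ability also makes the next gain cheaper:

# Another throwaway sketch (parameters invented): each gain in ability
# also shrinks the effort needed for the *next* gain, so the question is
# whether the compounding outruns the rising cost.
def self_improvement_steps(initial_ability=1.0, step_cost=100.0,
                           leverage=1.3, max_steps=20):
    """Yield (cumulative_effort, ability) for a fixed number of steps."""
    ability, elapsed = initial_ability, 0.0
    for _ in range(max_steps):
        elapsed += step_cost / ability   # smarter entity -> cheaper next step
        ability *= leverage              # each step compounds ability
        yield elapsed, ability

for effort, ability in self_improvement_steps():
    print(f"effort {effort:7.1f}   ability {ability:6.2f}")

Set leverage barely above 1.0 and the elapsed effort piles up almost linearly while ability creeps along - that's the human case, decades of optimization with little to show for it. Set it comfortably above 1.0 and the intervals between steps keep shrinking, so the whole run of improvements fits into a bounded amount of effort.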

> Few here would care to speculate about ultimate limits that will apply
> not merely to our first crude AIs but to the utmost efforts of galactic
> civilizations a billion years hence. What is any temporary bottleneck
> on such a scale? With time, and advancement, such things will crumble -
> or so we believe. But an intelligence enhancement is just as poisonous
> to bottlenecks as a million years of plugging away.
>
> Our history has trained us to find eternal bottlenecks absurd.
> Bottlenecks which somehow stand up to enhanced intelligence
> are just as silly.

True enough - the only real question is whether the first sentient AIs will have to wait for superhuman hardware speeds before they can Transcend. However, I don't really expect anyone who hasn't done AI programming to see that without a lot of argument. The human mind just doesn't seem to deal well with this kind of claim - any functional memetic filter tends to reject it out of hand. That's why I went back to basics in this thread, in hopes of laying out enough of the underlying assumptions to show that the scenario isn't just a 'Hollywood meme'.

Billy Brown, MCSE+I
bbrown@conemsco.com