<continued from part 3>
OK, here's where I start venturing onto thin ice. Given what we now know about intelligence, what can we say about intelligence enhancement? Obviously we don't really know how to build an intelligent mind yet, so we can't make any firm predictions. However, AI has advanced to the point where it seems plausible that a mind could be built using complex combinations of the problem-solving methods that are currently known. If that is indeed the case, then we can anticipate some of the general characteristics of IE:
Assume we have an intelligent entity implemented in software (it could be an AI, an upload, or some sort of hybrid entity - our only constraint is that we assume someone knows enough about the entity to make changes to it). What benefits does it gain if we give it faster hardware to run on?
If we make no changes at all to its software, this will simply make the entity think the same thoughts faster - from its point of view, the rest of the world slows down. However, the task of modifying the entity to take advantage of the increased speed is simple compared to the task of creating the entity in the first place.
Modifying decision-tree abilities to take advantage of the extra speed is trivial - it could even be made automatic for an artificial mind. Modifying data-transformation abilities to work better is an engineering problem of reasonable complexity. Knowledge bases will not be much affected - they simply do the same things faster.
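To make the 'automatic' part concrete, here is a toy sketch in Python (my own illustration, not a description of any real system - the moves and evaluation function are stand-ins): an iterative-deepening search converts whatever processing speed is available into extra search depth, with no changes to the code at all.

```python
import time

def search_best(moves, evaluate, time_budget):
    # Iterative deepening: search one ply deeper on each pass until the
    # wall-clock budget runs out.  On faster hardware the same loop
    # simply reaches a greater depth before the deadline - no code
    # changes are needed, which is the sense in which the adjustment
    # is "automatic".
    deadline = time.monotonic() + time_budget
    best, depth = None, 0
    while time.monotonic() < deadline:
        depth += 1
        best = max(moves, key=lambda m: evaluate(m, depth))
    return best, depth
```

Double the machine's speed and the same program reaches roughly one more ply of search in the same budget; the program itself never needs to know how fast its hardware is.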
What would this mean in terms of intelligence? Based on current experience, we would expect geometric increases in processing power to yield at least linear improvements in ability for both decision-tree and data-transformation problems. Some abilities will improve even faster, while a few (like playing tic-tac-toe) will become 'solved' problems and stop improving. Learning speed will enjoy an increase somewhere between linear and geometric - the knowledge bases and other infrastructure enjoy a geometric increase in performance, but we would expect real-world constraints (such as network bandwidth, or the time required to perform physical actions) to place some limits on the rate of improvement.
As an interesting note, it would appear that the effort required to take advantage of each speed increase grows relatively slowly - a mind that knows enough to make the necessary changes should have little trouble taking advantage of all available processing power.
A potentially much more powerful approach to intelligence enhancement is optimization - using better heuristics, more sophisticated decision-tree searches, more efficient data-transformation algorithms, and so forth. This approach can sometimes yield amazing results - performance improvements of several orders of magnitude can often be achieved through relatively straightforward optimizations.
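A classic small-scale example of this kind of optimization (my own illustration, using the textbook Fibonacci function rather than anything specific to minds): caching intermediate results turns an exponential-time algorithm into a linear one, a performance gap of many orders of magnitude from a one-line change.

```python
from functools import lru_cache

def fib_naive(n):
    # Exponential time: the same subproblems are recomputed over and
    # over, so fib_naive(40) already takes noticeable seconds.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # The straightforward optimization: cache each result the first
    # time it is computed.  The algorithm becomes linear, and values
    # like fib_memo(200) are instantaneous.
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)
```

The point is that nothing clever was invented here - a known, routine technique was applied - yet the speedup dwarfs anything faster hardware could deliver.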
Unfortunately, this sort of improvement also tends to be self-limiting. In any given problem domain there comes a time when all of the known methods for improving performance have been applied, and there are no more obvious improvements that can be made. Then you are reduced to inventing new solutions, which is a process of scientific discovery that requires large amounts of effort and produces only erratic results.
Another approach to IE would exploit the inherent flexibility of general-purpose computers by dynamically allocating processing resources. The idea here is to allocate most of the mind's processing power to whatever problem it is concentrating on at the time, rather than allocating a fixed amount of power to each cognitive ability. The result would be a substantial increase in effective intelligence, equivalent to the effect of a large speed increase (something like x10 to x100, depending on how much temporary specialization is actually possible for a problem).
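The arithmetic behind that x10-to-x100 figure can be sketched in a toy model (the model and the numbers are assumptions of mine, not measurements): if a mind's processing power is split evenly among N abilities, then concentrating nearly all of it on the current focus multiplies the effective speed on that one problem by roughly N.

```python
def effective_speed(total_power, n_abilities, focused):
    # Toy model of resource allocation.  With a fixed, even split each
    # ability gets total_power / n_abilities; with dynamic allocation
    # essentially everything goes to the problem currently in focus.
    if focused:
        return total_power
    return total_power / n_abilities
```

So a mind with (say) fifty distinct cognitive abilities that can fully specialize would see about a x50 effective speedup on the focused task - squarely inside the x10 to x100 range, with the actual figure depending on how completely the other abilities can be suspended.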
An entity with access to large amounts of data storage space can make another interesting tradeoff. As computer scientists have long observed, allowing more time to solve a problem can substitute for faster hardware. As long as the entity does not run out of memory, it can take 10, 100 or even 1,000 times longer than normal to think about a problem, and reach solutions that would normally require a higher level of intelligence running on faster hardware. Of course, many problems are too time-sensitive for this approach to be feasible.
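A simple way to see the time-for-speed tradeoff (a toy Monte Carlo illustration of mine - the point is only that the total amount of computation, not the wall-clock rate, determines the quality of the answer): estimating pi by random sampling gets more accurate with more samples, and it makes no difference whether those samples come from a fast machine in an hour or a slow one over a week.

```python
import random

def estimate_pi(samples, seed=0):
    # Monte Carlo estimate of pi: the fraction of random points in the
    # unit square that land inside the quarter circle approaches pi/4.
    # Accuracy depends only on the sample count, so trading time for
    # hardware speed leaves the result unchanged.
    rng = random.Random(seed)
    hits = sum(1 for _ in range(samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / samples
```

A thousand times more samples yields the same improvement whether it comes from a thousandfold-faster processor or from simply waiting a thousand times longer.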
Well, I'm sure there are lots of other ways to approach IE, but I haven't thought of them yet. Suggestions are welcome - I would like to include more possibilities in the list.
Billy Brown, MCSE+I