Intelligence, IE, and SI - Part 1

Billy Brown
Wed, 27 Jan 1999 15:50:09 -0600

Since there seems to be a fair amount of interest in the topic, I've decided to go ahead and post what I've got so far (in sections, of course). The final version of all of this will (hopefully) end up as a paper-length treatment of the topic, but it has a ways to go before it's ready for that.

I'll start here with some initial groundwork on the nature of intelligence, both human and superhuman. The next section will develop these ideas into a theory of intelligence enhancement, and the third will explore some implications of this theory for AI/SI/IE.

So, here's part I:

Before we can take up the task of examining the prospects for intelligence enhancement, let alone predict the abilities of an SI, we need some agreement about what we are talking about. We need to make a few reasonable assumptions about the nature of mind, and define what we mean by 'intelligence'.

The first assumption we must make is simple, reasonable, and highly controversial: that sentient minds arise from complex computational processes implemented by physical matter. They require no nonphysical 'thought substance', no incomprehensible quantum magic, and no supernatural forces. If this assumption fails, all of the following analysis is meaningless.

We will also assume, for the sake of argument, that there is no invisible, hard constraint on the maximum complexity of a mind. The only limits to the development of minds with vast amounts of processing power are those of the physical world: the speed of light, the scale of matter, and so forth. We will not approach those limits in this analysis.

As for the nature of intelligence, let us remember that we are more concerned here with external effects than subjective experience. It does not matter whether a particular entity is 'really' sentient, or 'really' intelligent in some abstract, intangible sense. Instead, what we care about is what the entity is capable of doing - what kinds of problems can it solve, and how well?

Now, intelligence in this sense is not a unitary entity. It is perfectly possible to be good at solving one class of problems, and do a poor job of solving others. There is a very large (possibly infinite) number of different problem domains in which an intelligent entity might have some ability. Some of these domains are related to each other, such that a high level of ability in one domain can be used to solve problems in the related domains. Other problem domains are independent of each other, and require completely different problem-solving approaches.

What, then, defines 'human-equivalent' intelligence? Humans normally have a certain minimal level of competence at solving a wide variety of problems, from sensory processing to social interaction to planning and logical thought. However, we do not declare someone who is deficient in one problem domain to be sub-human. In fact, we often do not consider even large impairments to indicate low intelligence - witness the stereotype of the technically brilliant but socially inept genius.

The answer to this apparent contradiction is also the reason why so many people think of intelligence as a single ability. Picture a graph with different cognitive abilities along the X-axis, and increasing ability along the Y-axis, so that the graph depicts varying ability levels in different problem domains. What we commonly call 'intelligence' is the area under this curve, with various distortions based on our own ideas about which abilities are important. Different humans may have different levels of ability in each problem domain, but we expect the total area under the curve to fall within a given range.

We can therefore say that an entity has human-equivalent intelligence if it meets the same criterion: its total ability in the relevant problem domains must fall within the same range as that of humans. A transhuman entity would be one whose total ability falls well beyond the human range, and an SI would be an entity with an astronomically large total ability.
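The 'area under the curve' model above can be sketched in a few lines of code. Everything here is invented for illustration: the problem domains, the ability scores, and the human-range bounds are all hypothetical placeholders, not real measurements.

```python
# A toy sketch of the ability-curve model: an entity's 'intelligence' is
# the sum (area) of its ability scores across problem domains, and its
# classification depends on where that total falls relative to an assumed
# human range. All numbers below are made up for illustration.

HUMAN_RANGE = (50.0, 150.0)  # hypothetical total-ability range for humans

def total_ability(profile):
    """Sum ability scores across all problem domains (the 'area under the curve')."""
    return sum(profile.values())

def classify(profile):
    total = total_ability(profile)
    if total < HUMAN_RANGE[0]:
        return "sub-human"
    if total <= HUMAN_RANGE[1]:
        return "human-equivalent"
    return "transhuman"

# An 'idiot-savant' profile: very strong in one domain, weak in others,
# yet human-equivalent in total -- matching the point made in the text.
savant = {"mathematics": 90.0, "sensory": 20.0, "social": 2.0, "planning": 10.0}
print(classify(savant))  # -> human-equivalent
```

The point the sketch makes is the same one the text does: the classification depends only on the total, so wildly uneven profiles can still land in the human-equivalent range.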

Note, however, that there is no requirement that even an SI possess normal human ability in every possible problem domain. One could imagine an
'idiot-savant' entity with high levels of competence in some areas, and
little or no ability in others, just as occurs in humans.

<to be continued>

Billy Brown, MCSE+I