Infinite boredom? (was: >H Re: The Desirability of Immortality)

Nicholas Bostrom (
Tue, 21 Oct 1997 23:39:49 +0000

Mitchell Porter wrote on the Transhuman Mailing List:

> It seems clear that numbers (for
> example) are in infinite supply, and therefore so are facts about
> numbers, but this doesn't demonstrate that there's an infinity of
> *interesting* facts. Perhaps there's only a finite amount of
> "qualitative novelty", followed by "qualitative eternal recurrence".
> Or perhaps there's an infinity of interesting things to discover, but
> they're harder and harder to get to, so eventually one reaches a point
> of diminishing returns: the eons of boring labour, contemplating googols
> of dull but true propositions, simply don't make up for those
> increasingly rare surprises...

And John Clarke commented:

> there is reason to think that with increasing intelligence, boredom
> will become less of a problem, not more.

Here is how I look at this problem:

Facts are not interesting in themselves. They are interesting *for*
somebody in a certain situation. There is a considerable correlation,
within some groups of humans, between what sets of facts each finds
interesting -- that's part of the reason why peer-reviewed journals
work: what the referees find interesting is more likely to interest
the readership than what the referees don't find interesting. I also
think that John is right about intelligent people tending to be
interested in more things than stupid people. This, however, I take
to be a fact about human psychology, not a necessary truth about the
essence of what intelligence is, as demonstrated by the existence of
highly intelligent people who don't find anything interesting, for
example because they are depressed.

My guess is that what produces the feeling of intrigue and interest
is activity in some fairly localised center of the brain, perhaps
somewhere in the limbic system. It is known that electrical
stimulation of these areas can produce intense sensations of hunger,
thirst, irreality, remoteness, smallness (everything appears small),
and it doesn't seem implausible that intellectual interest might
also originate here. What typically triggers this feeling can of
course be some very high-level neocortical processing -- say, a
physicist being excited by a formula because he understands its
significance. But the same feeling could, in principle at least, also
be triggered by means of appropriate electrical stimulation of the
brain or by manipulating the emotive circuitry of the simulated
intellect. In fact, the only reason that certain things naturally
trigger our interest is that evolution has found the link conducive
to fitness on the African savanna.

I therefore see no reason why the discrimination function by which
humans currently distinguish interesting ideas or facts from
uninteresting ones could not be replaced by any other criterion one
might come up with, including the criterion that *every* idea is
highly interesting.

Nor do I see any grounds for supposing that the present
discrimination function is optimal for life in the modern world, much
less for the sort of life a superintelligence would live. "Optimal" should
here be understood in the relative sense, as being conducive to
achieving whatever other goals might be held by the individual.
Instead I would expect that beings in the future, if they have the
technology to do so, will adjust their curiosity to whatever level
makes them most efficient at achieving their other goals. If pleasure
is one of these goals, then rather than making themselves inefficient
by being too curious, they could leave their curiosity at the optimal
level and turn on the pleasure directly, without going to the trouble
of first encephalising it and associating it with a thought or an
idea.

These considerations suggest to me that while more intellectual
curiosity, and its objective correlates (such as diversity,
information, and novelty), might be excellent things for people
like us in a world like the one we live in today, it might
nevertheless be overly anthropocentric to assume that these things
will also play a central role for a superintelligence or a society of
uploaded humans.

Nick Bostrom