Um, "Algernon's Law" was invented by me, not the (mouse) Algernon. Nor
does it yet occupy the Theory/Hypothesis spectrum; that applies only to
individual Algernic modifications, not Algernon's Guideline in general -
which is aiming for a position in cognitive science similar to that of
TANSTAFFL in politics, or Murphy's Law in engineering. I.e, you can
fool it six ways from Sunday, but you'd damn well better know about it
to begin with. Anyway...
There are three basic ways an intelligence enhancement can be an
evolutionary disadvantage: Galileo-martyrdom, biological limits, or
cognitive tradeoffs.
The first is the Galileo-martyrdom theory of genius. That is, the
intelligence enhancement either (1) causes the person to be lynched by
an angry mob or (2) causes the person to realize there are more
important things in life than reproduction. Although this is a favorite
note of science-fiction authors, many conditions would have to be
satisfied before we could say with confidence that the only downside of
some nootropic was that you'd be SO smart the normals just couldn't take
it any more. Like, the nootropic tampers with the serotonergic systems
and turns you into a megalomaniac genius. Even that wouldn't be enough,
frankly, because it's too easy to ask why this form of genius requires
some random emotional state.
If the nootropic makes a specific chemical change that causes you to
lose interest in reproduction, then that's fine - although, as above,
one would have to show that this was *inherent to the advantage*,
and not just a side effect of the particular chemical used - otherwise,
why hasn't a similar but less devastating drug been invented by
evolution long ago? An example might be an improvement to the
goal-processing system that causes the Algernon to dispassionately and
perfectly evaluate the justification of every goal; such a person might
not be responsive to his evolutionarily tuned, unjustified emotions.
Next come biological limits, such as brain size versus hip width, or
neuron count versus metabolic energy consumed. A genuine nootropic would probably
work by bypassing one of these limits. Example: Although this isn't a
pill-type nootropic, it's still "add-to-brain" material, and the most
obvious of all: Extra brains. Clone yourself, take the fetal neural
tissue. (Before it forms a brain, of course: (a) ethics, (b) it won't
integrate.) Maybe you'd need to expand the skull somehow, maybe not;
maybe the neurons would integrate, maybe not; it's still worth trying, I
say. And the reason brains haven't grown before is, of course, that the
head would be too big to get out of the birth canal.
Similarly, a nootropic that caused neurons to double in speed (or
plasticity) at the expense of octupling energy consumed would also get
around Algernon's Law. You might need to walk around with a
refrigerator on your head, or you might keel over from your heart
beating twice as fast (until it all adjusted), but modern medical
science could probably see you through it.
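(For scale, by the usual rough estimate: the waking brain dissipates
something like 20 watts, so octupling neural energy consumption means
delivering and shedding on the order of 160 watts. Hence the
refrigerator.)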
Cognitive tradeoffs, however, are the main thrust of Algernon's Law.
Too many old-time science fiction novels had Miraculous Brain
Stimulators that just poured electricity into some section of the brain
and, ta-da, your IQ doubled. This isn't the reason I invented
Algernon's Law, but it is the reason I felt it had to be published: I
was afraid that at some point in the near future, some wannabe
transhuman would stick a wire into his frontal cortex, pour in the
juice, and expect to be turned into a superman. (I do NOT think
Algernon's Law is absolute; most of the workarounds/disproofs I get are
actually pointed out on the Web page; and Algernon's Law is a tool, not
a message of despair. OK?)
Well, not surprisingly, pouring juice into the frontal cortex
actually degrades performance by adding noise. Proof by induction:
They used induction to stimulate currents in the frontal cortex and
performance went down. Part of "Algernon's Law: The Web Page" deals
with where you pour the juice in to actually stimulate some ability.
(Answer: The limbic system - emotions can trigger cognitive abilities
and can be evoked by electrical current.)
So, that being said, what happens when you "stimulate" some ability? My
answer is that this ability consumes more than its fair share of
cognitive resources that would ordinarily be distributed evenly among
many abilities. For the sake of discussion, let's suppose that two
mental abilities run a "search tree" over identical subject material,
but one mental ability works best with a wide, shallow, breadth-first
search, while another works best with a narrow, deep, depth-first
search. If the "stimulation" locks ability A on "always on", ability B
will suffer, because - again by hypothesis - neural formations are not
easily reprogrammable serial computers; a given connective architecture
is either wide and shallow or narrow and deep. Thus, each ability can
prosper only at the expense of the other.
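To make that concrete, here's a minimal sketch (Python, with the neuron
counts invented purely for illustration): a fixed budget of N neurons
arranged as a uniform tree ties breadth and depth together
(breadth^depth is roughly N), so the same budget is either
wide-and-shallow or narrow-and-deep, never both.

    import math

    # A fixed budget of N "neurons" arranged as a uniform search tree
    # ties breadth b and depth d together by b**d ~= N, i.e.
    # d = log(N) / log(b).  Widening the tree necessarily flattens it.
    N = 10 ** 6  # illustrative budget

    for breadth in (2, 4, 16, 100):
        depth = math.log(N) / math.log(breadth)
        print(f"breadth {breadth:>3} -> depth {depth:5.2f}")

    # breadth   2 -> depth 19.93   (narrow and deep)
    # breadth   4 -> depth  9.97
    # breadth  16 -> depth  4.98
    # breadth 100 -> depth  3.00   (wide and shallow)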
Moreover, any artificial amplification will result in a net evolutionary
disadvantage, in this case for deep mathematical reasons. In search
trees, doubling the resources available yields only a small increase in
performance. A tree of depth 10 and breadth 2 consumes about a thousand
neurons (2^10 = 1024). Add another thousand neurons and the new tree is
only of depth 11 - a 10% increase from a doubling of resources. So say
one ability uses a million-neuron tree of depth 10 and breadth 4 (4^10 =
1,048,576), while another uses a million-neuron tree of depth 5 and
breadth 16 (16^5 = 1,048,576).
Locking ability A into the "always on" position at the expense of B -
assuming the entire neural resources of B are devoted to A - will result
in a trivial increase in ability A: around 0.5 extra depth in the first
case, or around 2.4 extra breadth in the second (16 grows to
16 x 2^(1/5), about 18.4).
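A quick sanity check of those figures - just the arithmetic above,
redone in Python with the same made-up tree sizes:

    import math

    # Both example trees hold about a million neurons:
    # 4**10 = 16**5 = 1,048,576.  Devoting B's entire budget to A
    # doubles A's neurons; here is what that doubling actually buys.
    N = 4 ** 10   # ~1e6 neurons for one ability
    N2 = 2 * N    # A's budget after cannibalizing B

    # Tree 1 (breadth 4, depth 10): extra depth from doubling.
    print(math.log(N2, 4) - math.log(N, 4))   # 0.5

    # Tree 2 (depth 5, breadth 16): extra breadth from doubling.
    print(N2 ** (1 / 5) - N ** (1 / 5))       # ~2.4 (16 -> ~18.4)

Doubling a tree's resources always adds the same constant log-of-two to
its depth, no matter how big the tree already is; that's the diminishing
return in a nutshell.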
The basic framework may vary from ability to ability, but the message of
Algernon's Law is the same: "Things are the way they are for a reason;
the human brain was designed by evolution to operate at capacity, and
you can't get huge performance increases by wishing for them, any more
than you can speed up a computer by doubling the wattage. You have to
actually add resources, and then you have to explain why evolution
hasn't added them already."
So - what nootropics might actually work, getting around Algernon's
Law? I see a few:
1) Additional neurons. Either fetal neural tissue transplants or
something that coaxes the nerve tissue into regenerating - though the
latter is suspect. (You might run an 85% risk of brain cancer.)
2) High-speed neurons. Maybe a drug that myelinates all the
unmyelinated neurons, or a complex nanotechnological fix replacing
neurons with transistors and axons with optical fibers.
3) Increased plasticity. A drug might cause the brain to revert to
childhood, temporarily decreasing actual intelligence, but vastly
increasing one's ability to learn. Or the drug might delay puberty
until age 22; I think that would be a wonderful idea. The phenomenon of
teenagers, after all, is the result of having all these physically
mature but not-ready-to-join-society types around.
4) Altered states of consciousness. The brain is full of
evolutionarily advantageous distractions such as boredom, lust, and so
on. A nootropic might remove the ability to be bored - although one
would have to be careful that boredom didn't stimulate some mental
ability. Removing frustration, for example, would be very iffy indeed.
But some high-tech derivative of cocaine, without being addictive or
pleasurable, might still give programmers the ability to work frantic
16-hour days for years on end without even beginning to feel "burned
out". See David Pearce's "The Hedonistic Imperative", my own discussion
of "Iron-Willed Algernons", and a certain hyperoptimistic member of this
list.
5) One might also lock some given ability on high - not to enhance it,
but simply because cognitive research has shown that that is the best
approach to some problem.
6) Finally, there's my original proposal - lock some ability on high,
and if this nukes out half the brain, deal with it. Would you sacrifice
your ability to understand "Godel, Escher, Bach" if it would make you
the world's greatest designer of program architectures? Would you do it
if *someone* with ability of that level was needed for us to make it to
the Singularity? Call 1-800-555-SURE or 1-800-555-NOPE to register your
opinion.
--
sentience@pobox.com    Eliezer S. Yudkowsky
http://tezcat.com/~eliezer/singularity.html
http://tezcat.com/~eliezer/algernon.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I think I know.