1) (Moore's Law) Computing power doubles every two years.
(Eliezer's refinement) Computing power doubles every two subjective years.
So once computers reach human equivalence, each doubling takes less
real time: two years for the first, then one year, then half a year,
and so on, which would lead to infinite power in four years if there
were not some ceiling on computing speed.
I asked myself how much *subjective* time would elapse between
human-equivalence and maximum speed. If there were no ceiling on
speed, the elapsed subjective time would be infinite, but in practice
speed is going to reach some apex (e.g. a millionfold increase).
Suppose we call human processing speed 1 h ("h" for "human"), the
maximum processing speed K h, and the Moore's-Law doubling time M. By
Eliezer's criterion, the subjective time to reach maximum speed would
be M log_2(K).
For K = 1 million (~= 2^20), M = 2 years, this is about forty subjective
years (s-years). For twenty doublings, elapsed real time would be
2 + 1 + 1/2 + .. + 1/(2^18) yrs = 4 - 1/(2^18) yrs, or about two minutes
short of four r-years (real years!).
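The arithmetic above is easy to sanity-check; here's a quick sketch in
Python (the only inputs are the figures already assumed: K = 2^20,
M = 2 years):

```python
import math

M = 2          # Moore's-Law doubling time, in subjective years
K = 2 ** 20    # ~1 million: assumed speedup ceiling over human speed

doublings = int(math.log2(K))          # 20 doublings to reach the apex
subjective_years = M * math.log2(K)    # M * log2(K) = 40 s-years

# Real time: the k-th doubling runs at speed 2^k, so it takes
# M / 2^k real years.
real_years = sum(M / 2 ** k for k in range(doublings))

print(subjective_years)   # 40.0 s-years
print(real_years)         # just short of 4 r-years
```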
(Incidentally, the way NOT to do this problem is to say (as I did),
"Between human equivalence and maximum speed, speed is a hyperbolic
function of time, so let's integrate a hyperbolic function." The integral
of Sqrt[x^2-a^2] is a mess, as http://www.integrals.com can tell you.)
2) The upper bound on this process would be determined by the limits
of what one can do with atomic matter, which is what we have to hand.
(That millionfold increase is Drexler's guesstimate of the upper bound.)
Perhaps you can go faster using (e.g.) neutronium, but I have no idea how
long it would be before we could actually obtain some (how far to the
nearest neutron star? how long would it take to get there in s-time?).
Once you start manipulating very dense objects, though, there may be
the possibility of using baby universes. If you can spawn a new universe
but retain a traversable wormhole connection to the parent universe, you
can probably create an exponentially proliferating network of universes.
How quickly? The first stars formed about 1 billion years pbb (post Big
Bang); according to the recent discussion here on the far future, the
last stars will be dying out at about 100 trillion years pbb. So if
we assume that it's 10^9 years before you can start spawning again from
inside a new universe, the useful lifetime of a single universe will be
about 10^14 years = 10^5 "generations", certainly long enough to
achieve exponential growth. (If we assume, arbitrarily and perhaps
conservatively, that each universe can spawn a billion others, then
after n generations (n billion years) there will be 10^(9n) universes
in our network. The "retirement" of earlier generations makes a
negligible difference to the population increase.)
If we fill all those universes with processors and turn the whole
network into a single big computer, it looks as though we can have
exponentially increasing computer power forever. For the arbitrary
case above, doubling time would be about 30 million years.
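Under the same arbitrary assumptions (a billionfold spawn per
10^9-year generation), the doubling time falls out of a one-line
calculation; sketch in Python:

```python
import math

generation_years = 1e9   # assumed time before a new universe can spawn
branching = 1e9          # assumed universes spawned per generation

# Population after t years: branching ** (t / generation_years).
# Doubling time solves branching ** (t / generation_years) == 2:
doubling_years = generation_years * math.log(2) / math.log(branching)

print(doubling_years / 1e6)   # roughly 33 million years
```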
But in fact there would be a slowdown, since as the network grows,
the time it takes to send a signal from one side of the network to
the other will increase. Here's why:
Presumably there is a lower limit to the size of a stable wormhole.
In a universe like ours, the volume of space increases at best
polynomially with time, so we can only have a polynomial increase in
the number of wormholes per universe (and thus only a polynomial increase
in the number of universes to which a single universe is directly connected).
But at the maximal proliferation rate, the number of universes
increases exponentially with time. So the longest network path grows
without bound (each new generation adds at least one more hop), and
entities distributed across the network suffer an ever-worsening
communication *slowdown*.
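A toy model of the distance problem (my own illustration, assuming
each universe keeps direct wormholes only to its parent and its own
children): the population multiplies a billionfold each generation,
but a signal from a newborn universe back to the original one must
cross one hop per generation.

```python
# Toy spawning tree: each universe is wormhole-linked only to its
# parent and children (assumed, for illustration).
branching = 10 ** 9   # universes spawned per universe per generation

for generation in range(1, 6):
    population = branching ** generation   # universes in this generation
    hops_to_root = generation              # wormhole hops back to the origin
    print(generation, population, hops_to_root)
```

So the newest universes sit ever farther, in hops, from the oldest
ones, even though every individual hop stays local.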
Despite this, it seems logical to me that the final form for a
self-evolving entity which makes it to the stage of intelligent
superobject is an endlessly proliferating network of universe-brains,
connected to avatars in spacetime regions throughout a broader
"multiverse". This would be the final form since it's one which allows
for immortality and unlimited further development without further
upgrades in the basic platform (by which I mean changes like the move
from carbon to silicon, or from atoms to neutronium).
On the other hand, the reason this scenario isn't as well known as the
Dyson and Tipler scenarios (I call it the Linde scenario, after
Andrei Linde, the cosmologist who developed the idea of the
self-reproducing universe) is that it lacks mathematical rigor.
We have (speculative!) mathematical descriptions of the spawning of
baby universes, and of traversable wormholes, but as far as I know
no one has a model for how a baby universe might remain traversably
connected to its parent. The closest thing I've seen is a remark by
Hawking, that there are particle trajectories which can pass from
our universe into baby universes (but they proceed along imaginary
time, whatever that turns out to mean ontologically). So perhaps
you make a wormhole first, and then send one end on such a trajectory.
3) Michael Nielsen (mnielsen@tangelo.phys.unm.edu) has figured out
something he calls the quantum Moore's Law, describing our ability to
simulate quantum computers. Basically, we can manage one more qubit
every two years. I presume that this is because adding one qubit to a
quantum computer doubles the number of its basis states. (The current
apex of quantum-computer simulation: a 21-qubit computer was simulated
somewhere in South America, and successfully factored 15 using Shor's
algorithm.)
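Nielsen's rate makes sense from state-vector bookkeeping: a classical
simulation of n qubits stores 2^n complex amplitudes, so each extra
qubit doubles the memory (and work) required. A minimal sketch of the
arithmetic (the 16-bytes-per-amplitude figure is my own assumption,
one double-precision complex number):

```python
import math

BYTES_PER_AMPLITUDE = 16   # one complex number at double precision

def qubits_simulable(memory_bytes):
    """Largest n such that 2**n amplitudes fit in the given memory."""
    return int(math.log2(memory_bytes / BYTES_PER_AMPLITUDE))

# Doubling classical memory (one Moore's-Law period) buys one qubit:
for gb in (1, 2, 4, 8):
    print(gb, "GB ->", qubits_simulable(gb * 2 ** 30), "qubits")
```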
I have no idea what the rate of increase in processor power
would be once you start making *real* quantum computers, as opposed
to simulating them.
-mitch
http://www.thehub.com.au/~mitch