On Sun, 12 Oct 1997 Ramez Naam <ramezn@EXCHANGE.MICROSOFT.com> Wrote:
>Speed Constraints: Barring some revolution to Relativity, c limits
>the rate of expansion of even a Vingean Power. In addition it limits
>the speed of signaling within that entity's "brain".
Not very constraining. The fastest signals in a biological brain move at
about 100 meters a second, and many are much slower. Light moves at 300,000,000
meters a second, and nanotechnology can make things much smaller than a neuron.
One second for a mind a billion times faster than ours would be like 30 years.
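The arithmetic behind those figures is easy to check; a minimal sketch using only the numbers quoted above:

```python
# Back-of-envelope check of the numbers in the text.
neural_speed = 100        # m/s, fastest biological nerve signals
light_speed = 3.0e8       # m/s, speed of light

# Light outruns the fastest neural signal by a factor of three million:
print(light_speed / neural_speed)        # 3000000.0

# One second of real time for a mind running a billion times faster:
speedup = 1e9
subjective_seconds = 1 * speedup
subjective_years = subjective_seconds / (3600 * 24 * 365)
print(subjective_years)                  # about 31.7 years
```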
>Chaotic Computability Constraints: The most ambitious nanotech
>scenarios posit universal assemblers that can be programmed or
>designed to build specific structures.
>IMHO this is a clearly chaotic system.
Why? Life can assemble matter into very complex structures, and I see nothing
about it that is any less chaotic than nanotechnology; rather the reverse.
>The structure of any complex object created by this sort of nanotech
>would seem to display an exquisite sensitivity to tiny variations in
>design of the assembler
No more sensitive than the enzymes and ribosomes that built you and me.
>or the assembler "software"
No more sensitive than our DNA.
>and possibly to the local environment itself.
Just because a process is complex does not prove it is chaotic. Proteins
start out as a linear sequence of amino acids that can't do much, but then,
in a manner too complex for any computer to simulate, they fold up into
intricate shapes, and it is these shapes that give them their power. Despite
their staggering complexity these shapes are repeatable. If the shape of
proteins were exquisitely sensitive to tiny variations in the environment
they would never take the same shape twice and life would be impossible.
>I'm very interested in thoughts from the list on:
>Where a Spike subject to these constraints will begin to level off.
Perhaps someday the rate of change will level off, but I'm just guessing and
that's all I can do when I try to figure out what a mind a billion times as
powerful as my own will come up with.
>Will quantum computing have any benefit here?
If you wanted to change the universe and could make a quantum computer of a
few dozen qubits, then you wouldn't have to wait for anything as prosaic as
nanotechnology. The reason is that when a conventional 64 bit single
processor computer performs an operation, it does it on ONE 64 bit number at
a time. When a 64 qubit single processor QUANTUM computer performs an
operation, it does it on ALL the 64 bit numbers at the same time, all 2^64 of
them, more than a billion billion, and every qubit added to the machine
doubles its already astronomical power.
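The doubling in the last sentence is just arithmetic; nothing here is assumed beyond the figures in the text:

```python
# Number of 64-bit values a 64-qubit register spans simultaneously:
print(2**64)                 # 18446744073709551616, more than a billion billion

# Each additional qubit doubles the number of states acted on at once:
assert 2**65 == 2 * 2**64
assert 2**64 > 10**18        # "more than a billion billion"
```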
>My immediate question was whether Lent proposes that we build
>traditional logic gates using this technique [quantum cellular
>automation] to replace transistors, or whether he's proposing an
>entirely new type of computer architecture based on QCAs.
I think he was talking about a new computer architecture.
>Given Planck space, c, and the maximum density matter can achieve
>before collapsing into a black hole, what is the maximum achievable
>computational power per unit volume?
It's hard to see how to think about a volume less than the Planck length
across, or how to do anything in less than the Planck time. The reason is
that as the wavelength of light gets smaller the energy gets larger, and so
does the mass; remember E = mc^2, so m = E/c^2. Thus at some point the
wavelength is so small and the mass is so great that a mini black hole is
formed with a singularity at its center. The energy of the photon where this
happens is the Planck energy, E = [hbar c^5/G]^1/2 = 1.22 * 10^28 eV, where
hbar is the Planck constant divided by 2 pi, c is the speed of light, and G
is the gravitational constant.
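The quoted figure is easy to reproduce; a minimal sketch, assuming standard CODATA values for the constants:

```python
import math

# Planck energy from the formula above: E = sqrt(hbar * c^5 / G).
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2

E_planck = math.sqrt(hbar * c**5 / G)    # joules, about 1.96e9 J
eV = 1.602176634e-19                     # joules per electronvolt
print(E_planck / eV)                     # about 1.22e28 eV, as quoted
```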
According to Heisenberg it is possible to get something for nothing; even in
a vacuum you can borrow energy, but the more you borrow the shorter you can
keep it. The amount of energy and the length of time you can detect its
existence are related in the same way as velocity and position: the more you
have of one the less you have of the other. In this way a vacuum is full of
virtual particles, but the Planck energy is a LOT of energy, so you can't
keep it for long.
The Planck time is t = [hbar G/c^5]^1/2 = 5.39 * 10^-44 seconds; after this
the mini black hole evaporates by Hawking radiation. The Planck length is
about 1.6 * 10^-33 cm, the distance light can travel in the Planck time.
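The same constants give the Planck time and length, and the Heisenberg argument above closes the loop: borrowing the Planck energy buys you almost exactly one Planck time. A sketch, again assuming CODATA values:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2

t_planck = math.sqrt(hbar * G / c**5)    # seconds
l_planck = c * t_planck                  # meters
print(t_planck)                # about 5.39e-44 s
print(l_planck * 100)          # about 1.6e-33 cm

# Borrowing the Planck energy, Heisenberg lets you keep it for hbar / E,
# which works out to exactly the Planck time:
E_planck = math.sqrt(hbar * c**5 / G)
print(hbar / E_planck)         # about 5.39e-44 s again
```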
This is a short distance, very short. A proton is about 10^20 Planck lengths
across; the Planck length is smaller compared to a proton than a proton is
compared to a region sixty kilometers wide.
Now that's small, but what if we wanted to go even smaller? The singularity
at the center of the mini black hole is a place of infinite spacetime
curvature, a true mathematical singularity, so our current laws of physics
can't tell us what is going on when things get that small; to do that we
need a theory of quantum gravity, and we don't have one yet.
On Mon, 13 Oct 1997 "Eliezer S. Yudkowsky" <firstname.lastname@example.org> Wrote:
>What about negative matter? You can have an arbitrary amount of
>computing material in a given volume, with net mass zero.
Antimatter is a dime a dozen, but it wouldn't help us because to gravity it's
the same as matter. I assume you're talking about antimass, and that's far
more exotic, but things may not be completely hopeless; we can already make
it, or something like it. If you place two large flat mirrors very close
together then, unlike the vacuum outside the mirrors, there can not be
virtual photons of every wavelength in the vacuum between the mirrors,
because some would interfere destructively. There are more virtual particles
in the vacuum outside the mirrors pushing them together than between the
mirrors pushing them apart. This force is not strong, but recently it has
actually been measured in the lab; two mirrors really do want to come together.
If you define a vacuum far from any object as containing zero energy, and
thus zero mass, and if it is able to push the mirrors together because the
space between the mirrors has less stuff in it, then that space between the
mirrors must have a negative mass.
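How weak is "not strong"? The standard ideal-parallel-plate Casimir formula, pressure = pi^2 * hbar * c / (240 * d^4), is not given in the text above, so the following is an illustration under that assumption rather than the author's calculation:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s

def casimir_pressure(d):
    """Attractive pressure in pascals between ideal flat plates d meters apart."""
    return math.pi**2 * hbar * c / (240 * d**4)

# At a one-micron separation the attraction is about a millipascal:
print(casimir_pressure(1e-6))   # about 1.3e-3 Pa: measurable, but not strong
```

The steep 1/d^4 dependence is why the effect only shows up when the mirrors are very close together.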
According to Kip Thorne a wormhole, like one made from a black hole, could
not be used as a time machine or let you travel instantly to a distant
galaxy, because it is inherently unstable; the very act of entering one
would destroy it. To stabilize a wormhole and make it useful you'd need
something that pushes the wormhole's walls apart gravitationally, something
with a negative energy density: you'd need antimass.
>There exist physical processes which are not simulable-to-arbitrary
>accuracy by Turing machines.
That's true of randomness and also of those weird instantaneous correlations
quantum mechanics can sometimes produce, but other than that a Turing machine
can simulate any physical process, although not necessarily in polynomial time.
>Even if all physical processes *are* simulable, they still use real
>numbers. Perhaps a clever Being could exploit a chaotic
>Mandelbrot-like boundary to perform calculations of arbitrary
>complexity in constant time.
Well..., there are strange attractors and there are a hell of a lot of them.
I don't know if it's been proven or not, but I doubt they could be put into
a one-to-one correspondence with the real numbers; unfortunately I don't see
how you could use this in computing. The shape of the attractor is known only
approximately, and in most cases they are infinitely complex fractals. OK,
you run your computer and it gives you a point in phase space; now what?
That point could lie in an infinite number of attractors, so which one is
correct? Run the machine again and now you have two points; you have
eliminated an infinite number of attractors, but unfortunately an infinite
number are still left. To find the exact and not just approximate attractor
you would have to run the machine an infinite number of times, and if you
could already do that then there would be no point in bothering with the
Mandelbrot boundary.
John K Clark email@example.com