Re: The Joys Of Flesh

Eugene Leitl (Eugene.Leitl@lrz.uni-muenchen.de)
Fri, 6 Sep 1996 13:14:34 +0200 (MET DST)


On Thu, 5 Sep 1996, John K Clark wrote:

> -----BEGIN PGP SIGNED MESSAGE-----
>
> Thu, 5 Sep 1996 Eugene Leitl <Eugene.Leitl@lrz.uni-muenchen.de>
> [ von Neumann hardware no target for uploading ]
> The term "Von Neumann machine" can have 2 unrelated meanings, a computer with
> a Von Neumann architecture or a machine that can duplicate itself from raw
> matter.

I meant the computer architecture. I believe the macroscopic
autoreplicators are most commonly called von Neumann probes. (Von
Neumann designed an abstract autoreplicator, inventing the cellular
automaton (CA) computation paradigm in the process. The original
automaton was huge, counting some 100 k cells; by now tiny
autoreplicating loops have been devised, consisting of just a few
cells. An animation of this can be found online somewhere, a few links
off

http://alife.santafe.edu

).
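
To give a flavour of the paradigm, a minimal 2D CA engine with a von
Neumann neighbourhood, in Python. The rule below is a toy placeholder
of my own, _not_ Langton's actual loop rule table (which runs to
hundreds of transitions):

    def step(grid, rule):
        # Apply `rule` to every cell; it sees (center, N, E, S, W).
        h, w = len(grid), len(grid[0])
        new = [[0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                new[y][x] = rule(grid[y][x],
                                 grid[(y - 1) % h][x],   # north
                                 grid[y][(x + 1) % w],   # east
                                 grid[(y + 1) % h][x],   # south
                                 grid[y][(x - 1) % w])   # west
        return new

    # Toy rule: each cell copies its northern neighbour, so patterns
    # drift south. A replicating loop swaps in a real transition table.
    drift = lambda c, n, e, s, w: n
    print(step([[0, 0, 0], [0, 1, 0], [0, 0, 0]], drift))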

Btw, a house-sized structure with _massive_ liquid cooling, built from
current-to-near-future WSI circuitry and burning several 100 MW, would
amount to a human equivalent. Such a structure could in theory be built
automatically, e.g. on the surface of Luna by a von Neumann probe. It
would _not_ be a von Neumann machine itself, though. Apart from that,
the drastic power and volume demands render such machines uneconomical,
except for transient (SI bootstrap) purposes.
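
A back-of-envelope check on that power figure, with both inputs being
my assumptions rather than measured numbers: take the ~10^18 synaptic
events/s derived further below, and mid-90s WSI silicon at roughly
10^9 useful operations per joule:

    events_per_s  = 1e18   # assumed synaptic events/s, human equivalent
    ops_per_joule = 1e9    # assumed WSI efficiency, useful ops/joule

    power_watts = events_per_s / ops_per_joule
    print("power: %.0f MW" % (power_watts / 1e6))   # -> 1000 MW

which lands in the same several-100-MW ballpark.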

> [ volume of a human equivalent, if molecular circuitry platformed ]
>
> Using the figure Hans Moravec came up with, 10^13 calculations per second to

Alas, if you refer to the estimates in the retina chapter of "Mind
Children", then Moravec is cheating. He makes lots of arbitrary
assumptions about the collapsibility of neural circuitry instead of
using a solid, not even pessimistic, estimate. I wrote a short, pretty
uncivil critique of how realistic his estimates are; unfortunately the
transhuman archive has been nuked, so I can't give you a URL.

Considering about 1 MBit equivalent of storage for a single neuron, 8
bits/synapse, a 10 k average connectivity and about 1 kEvent/s per
synapse, and assuming about 100*10^9 neurons, the computed cps values
are somewhat drastic (multiplied out below). Moreover, these are
_nonlocal_ MOPS (since we're at meaningless operations per second),
while all performance estimates which plot linearly on a logarithmic
scale assume on-die (registers & primary cache) accesses. Drop this
constraint, and you'll see a saturation. The only thing which truly
goes up exponentially is the number of transistors on one die, and even
that won't last long (unless we abandon semiconductor photolitho):
maximum die size is limited by random defect hits, the number of dies
is limited by achievable wafer size, and the size of the structures is
limited because you run into quantum effects very soon (apart from the
enhanced defect susceptibility of tiny structures).
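
The figures above, multiplied out (note the event rate sits five orders
of magnitude above Moravec's 10^13 cps):

    neurons         = 100e9  # ~1e11 neurons
    bits_per_neuron = 1e6    # ~1 MBit of state per neuron
    connectivity    = 10e3   # ~10 k synapses per neuron
    bits_per_syn    = 8      # 8 bits per synaptic weight
    events_per_syn  = 1e3    # ~1 kEvent/s per synapse

    storage_bits = neurons * (bits_per_neuron + connectivity * bits_per_syn)
    events_per_s = neurons * connectivity * events_per_syn

    print("storage: %.1e bits" % storage_bits)               # ~1.1e+17
    print("rate:    %.1e synaptic events/s" % events_per_s)  # 1e+18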

Taken together, these facts allow us to safely neglect all strong AI
claims of human equivalence by 2030 or somesuch. Vaporware, again.

(Btw, I'm a strong AI proponent, and an uploader. But you just can't
expect funds to flow if you continuously fail to deliver the
performance promised. That may be social, not technical, engineering,
but it is real nevertheless.)

> emulate the human brain, Drexler determined that an uploaded human mind would
> not be very big. Less than an ounce of matter, mostly carbon, and about 15

Though Moravec comes from robotics and Drexler is certainly a great
scientist, neither is a specialist in neuroscience. And it shows.

> watts of energy should be enough for an upload, much less if you use
> reversible computing. That's cheating a little because you'd also need a

Afaik, no groundbreaking demonstration of such reversible logic has
been done yet. It bears about as much promise as quantum cryptography
and quantum computers, imo. (Very little, if any.)

> cooling system for the nano computer, but even so, it wouldn't use more
> energy than a dim light bulb and would probably weigh less.

I think diamond rod logic is a bit slow (my estimates are based on
molecular circuitry CAM, which switches drastically faster), and his
figures rest on much too low (i.e. Moravec's) estimates.
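
For scale, a rough switching-time ratio; both time constants here are
my assumptions for illustration, not measured values:

    rod_gate_s = 1e-10   # ~0.1 ns per rod-logic operation (assumed)
    mol_gate_s = 1e-12   # ~1 ps per electronically excited molecular
                         # switching event (assumed)

    print("ratio: ~%.0fx" % (rod_gate_s / mol_gate_s))   # ~100x faster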

> If you wanted VR too you'd need a little more power to simulate a rich

AR, not VR. VR is something different. You can buy VR even now, and it
gets cheaper rapidly. See Doom, Quake, whatever. You don't need
dedicated renderer chips anymore; clever algorithmics can render great
fakespace. All we need are cheap trackers and great displays, e.g. TI's
DMD or retina-writer based, to make it an off-the-shelf technology.
5-10 years from now this will be found in every U.S. living room.

> environment, an entire virtual world for the upload, but it wouldn't amount

Basically, you'd need about twice the resources to run you + body + AR
environment, if not more. If you abandon most of the body model and
settle for an abstracted environment, then drastically less. How much
less, no one can currently tell. However, you'll still need noticeably
more than a being without one. Will you be able to pay for these
additional resources? In the really long run?

> to much, because speed would not be an issue. Even if the computer that was
> simulating you and the virtual world was very slow, from your point of view
> it would seem infinitely fast. If the machine had performance problems all

Of course, but I'd rather bang the maximum out of the number of atoms,
seconds and watts I can have. So I'd rather run at superrealtime,
possibly significantly so. 100x seems realistic, 1000x is stretching it.

Nanoists claim 10^6x, which is bogus.

> you'd have to do is have the part of the computer that was simulating you
> slowed down or even stopped, while leaving the part of the computer that
> simulated the rest of the universe running at normal speed. Regardless of how
> many calculations it would take to convince you that the simulation was real
> it could be done instantly, from your point of view. Once the machine was
> caught up, our part of the computer could be carefully restarted till the
> next speed bottleneck.

Yes, but if I'm supposed to live indefinitely, I'd like to eke out the
maximum amount of ticks until I can't use Sol as an energy source
anymore. Ok, I can fuse the light elements in the Oort cloud, but what
will I do when even these resources are depleted? (Disclaimer: should
transcension prove to be impossible, etc.)

> >practicablity of strong Drexlerian nanotechnology has not
> >been demonstrated yet.
>
> The practicality of nanotechnology has been demonstrated by life. The

That's why I said "strong Drexlerian nanotechnology" instead of
"nanotechnology".

Of course life is weak wet maspar nanotech; that's what I always say
when nanoists claim drastically enhanced performance. Performance might
be greater, sufficiently so to wipe out all life on Earth in a Gray Goo
scenario, yet insufficiently so to claim orders upon orders of
magnitude. One always tends to forget that atoms aren't that little, at
least in relation to most cellular structures.

Should strong Drexlerian nanotech be infeasible, we would be stuck with
a protein-autoassembling version of it. No big deal, since such
fabricated molecular circuitry is only about one, at most two orders of
magnitude more bulky than the diamondoid kind. And since it's not
mechanical but uses electronically excited molecules, it should be
pretty fast.

> practicability of strong Drexlerian Nanotechnology has not been demonstrated
> for the simple reason that it is not practical, yet. What Drexler has shown is
> that it is an engineering problem not a scientific one, that is, there is no

Wait a moment.

Drexler claims a repetitive positioning accuracy of 100 pm.
I'd like to see the quantum calculations of the mechanosynthesis
reaction which allow depositing a perfect, or nearly perfect,
diamondoid lattice. (I know he did them.) Then I'd like to see an
experimental confirmation, since QM calculations are unreliable, to say
the very least (just read the CCL list and look at all the simplifying
assumptions a QM run needs; lots of simulated trash has been produced
this way).

My estimate is that 10, not 100 pm are needed. Just look at a
diamondoid lattice from above and note the lattice period: when
zigzagged, C-C bonds appear a lot shorter in projection. Sterical
things. (A quick check of the geometry follows.)
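
The nearest-neighbour bond in diamond, and what it foreshortens to when
viewed along [001]:

    import math

    a = 356.7  # diamond lattice constant, in pm

    bond = a * math.sqrt(3) / 4   # C-C bond length
    proj = a * math.sqrt(2) / 4   # same bond projected onto (001)

    print("C-C bond:  %.0f pm" % bond)   # 154 pm
    print("projected: %.0f pm" % proj)   # 126 pm

With site spacings of 126-154 pm, a 100 pm placement tolerance is the
same order as the features themselves; cleanly resolving neighbouring
sites plausibly needs something closer to 10 pm.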

Then I'd like to know whether the deflection amplitude calculations
were done on a solid diamond rod, or on a real cantilever structure.

Then I'd like to know whether the defects in the lattice (what are his
calculations based on?) do not decrease mechanical stability, resulting
in a progressive amplification of errors in subsequent clones.

Then I'd like to see an estimate of what chemisorbed/physisorbed
species, which are extremely mobile and have a very high surface
concentration, do to the reactive moiety at the tip.

Then I'd like to know whether the mechanosynthesis reaction set is
sufficiently all-purpose to allow catabolism/anabolism of all needed
structures.

And that's an ad hoc list, written up by a nonspecialist. Great worms
come in small cans, no?

> known physical reason that would make it impossible.

No known physical reasons, indeed. But we want to know whether a) a
given structure can exist, b) we can build the first instance of this
structure, and c) this structure is sufficiently powerful to at least
make a sufficiently accurate copy of itself.

That's a lot of constraints, and all of them physical. They are greatly
alleviated for a macroscopic replicator, but bite very deep once you
start tweaking at atomic scales.

Soft nanotech is possible, but it does not utilize mechanosynthesis at
such a prominent scale. Claiming the problems to be merely engineering
is not good marketing, imo.

> >Anyway, diamond rod logic is too slow in comparison to
> >molecular switches, which operate on electronically excited
> >molecular states/quantum dot arrays.
>
> Not too slow for an upload. The acoustic speed in a diamond is 1.75 X 10^4
> meters per second, pretty slow compared to the speed of light at 3 X 10^8
> but very fast compared to the signals the brain uses at about 100 meters a
> second or less, sometimes much less. Because they would be so small, diamond
> rod logic would be far faster than any electronic logic circuits we have today,
> and enormously faster than the old steam powered biological brain each of us
> uses today.

Another brazen claim. Diamond rods, being mechanical things, are
deposited once. They represent states by being in different
configurations.

Now, neurons are mobile: they _change their physical circuitry_.
Connectivity is changing, weights are changing. Lacking this
flexibility, you'll have to simulate it. Moreover, neurons utilize a
very high connectivity, being topologically aligned on a
high-dimensional hypergrid. There are excellent reasons to suspect this
connectivity to be crucial -- so you have to simulate it too. So you
can't do it directly in hardware; you must start sending bits instead
of pushing rods (because of sterical constraints, the mechanical
flabbiness of diamond over large distances, etc.).

So you're losing lots of orders of magnitude this way. At least 3 of
them, easily (a rough sketch below).
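
Where those orders of magnitude might come from, with all latencies
being my assumptions for illustration: a hardwired synapse is one gate
delay, while a simulated one costs several nonlocal memory accesses
plus routing:

    gate_s   = 1e-10  # one hardwired gate operation (assumed)
    mem_s    = 1e-7   # one nonlocal memory access, ~100 ns (assumed)
    accesses = 4      # fetch pointer, fetch weight, accumulate, route

    print("slowdown: ~%.0fx" % (accesses * mem_s / gate_s))  # ~4000x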

> Fast as it is I'm sure we can do better. Drexler uses rod logic because it's
> easier to design than nanometer electronic logic, and if you want to show a
> proof of concept it makes sense to do so as simply as possible.

I know that; he says as much in "Nanosystems". But his estimates of
SIs in a box are based on diamond rod logic, and they're off doubly so,
because he assumes diamond rod logic is drastically better than
neuronal circuitry. (For all that is worth, it _should_ be better, but
not by that much.)

Btw, for the record, I am not saying that strong Drexlerian nanotech is
infeasible, only that we just can't tell yet. Assuming anything else,
claiming great performance but failing to deliver it _soon_, is
extremely damaging to one's credibility, and hence financing. We've all
seen what happened to strong AI in the 1980's; must nanotechnology
funding suffer the same fate?

(I'm aware that nanotechnology is currently mostly self-financed, but
this is likely to change in the near future, or so I hope.)

> >>Tim_Robbins@aacte.nche.edu
> >>Honestly, am I the only extropian who likes the flesh?
>
> >You might have no choice. You seem to assume >H level
> >intelligences to be actively benign, leaving you a sufficient part
> >of resources.
>
> I agree with Eugene. If the >H are nice enough to let us live, it will
> probably be in VR. They won't want us using up a lot of resources in the
> "real" world and they would probably be a bit squeamish about letting us fool

The complexity delta from us to them would be about that between a
nematode and us, maybe greater. I just hope that their evolved
cooperation will be _very_ benign, and that they will protect us from
lesser, but more vicious, inhabitants of the digital reality.
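
By crude neuron count alone (a proxy, not a real complexity measure):

    human_neurons    = 1e11
    nematode_neurons = 302.0   # C. elegans, the textbook count

    print("ratio: ~%.0e" % (human_neurons / nematode_neurons))  # ~3e+08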

> around at that level of reality, like letting a monkey run around in an
> operating room. They'll probably want a firewall to protect themselves from
> our stupidity.

I think they will use drastically different interfaces, not fakespace
renderers, even higher-dimensional ones. A Dyson sphere computer might
well be just one vast, complex entity in which a module hierarchy
exists, where the lowest level would be a human, not a neuron.
(Probably running BorgOSv456546546.3453.34524112 ;)

Alas, we just can't tell what the future will bring. The only sure thing
about the future is -- it will be beyond our wildest dreams.

>
> John K Clark johnkc@well.com
> [ pgp sig snipped ]
_________________________________________________________________________________
| mailto: ui22204@sunmail.lrz-muenchen.de | transhumanism >H, cryonics, |
| mailto: Eugene.Leitl@uni-muenchen.de | nanotechnology, etc. etc. |
| mailto: c438@org.chemie.uni-muenchen.de | "deus ex machina, v.0.0.alpha" |
| icbmto: N 48 10'07'' E 011 33'53'' | http://www.lrz-muenchen.de/~ui22204 |