Re: >H soft incremental uploads

Eugene Leitl (
Wed, 14 Aug 1996 10:30:09 +0200 (MET DST)

On Mon, 12 Aug 1996, Eric Watt Forste wrote:

> At 8:44 AM 8/12/96, Eugene Leitl wrote:
> > [ medical costs exploding ]
> I don't know about other countries, but I know that in the United States,
> over 50% of dollars paid to the medical industry are paid by the Federal
> government. I suspect this (and certain inefficient regulations) has a

Throughout Europe this percentage is likely to be vastly larger than
50%. In Germany, for example, there is a variety of state and private
(the latter having entered the scene only quite recently) health
insurance companies. Both the costs of medical treatment and the number
of nonpaying (state-sponsored) patients have risen dramatically. Health
insurance outfits now tend to step up their payment demands several times
a year. It seems as if either the average quality of medical service
will go down, or the whole system will collapse, leaving behind but a
husk of its former price/performance ratio. (At least it will then be an
autarkic and sustainable medicine.)

> great deal to do with escalating costs.

But a goodish fraction is caused by the advent of costly apparatus-based
medicine. As I write this, a 5 MDEM Siemens MRI tomograph is
operating in the room next door. It needs technicians to operate it, as
well as specialized M.D.s to interpret the pretty pictures it generates.
Virtually everybody who has health insurance can get a brain scan if he so
wishes -- and as the associated costs are enormous, we obviously have a
problem.

> > [ semiconductor IT system constraints ]
> Techniques for manufacturing new designs of computing hardware that can
> instantiate neural and fuzzy computational processes are developing *much*

I am tracking their progress as accurately as I can. It is not very
impressive so far. All of these new architectures are implemented in
semiconductor photolitho technology, a fundamentally 2d technique. Since
die/wafer size, integration density, and top switching speed are all
limited, we can estimate the top performance limit of this technology.
Though it may appear enormous by today's standards, it is still a far
cry from what biological systems can do. We're bound to hit saturation
(it can be seen on some plots already) quite soon. No alternative
techniques -- e.g. molecular circuitry -- will be ready by then, as
their development incurs a minimum latency. Currently, almost no
research towards molecular circuitry is being done. We are likely to
enter a zone of slow progress, which is likely to delay the Singularity
(insert usual disclaimer here).
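Since the ceiling follows from a handful of bounded quantities, it can be put as a back-of-envelope sketch. All constants below (die area, feature size, switching speed, the neuron and synapse counts) are illustrative assumptions, not measured data:

```python
# Back-of-envelope ceiling for 2d photolitho technology: devices per die
# times top switching speed bounds raw switching events per second.
# All constants are illustrative assumptions.

die_area_mm2 = 400                        # assumed practical die size limit
feature_nm = 100                          # assumed minimum feature size
device_area_nm2 = (10 * feature_nm) ** 2  # ~10x10 features per device
devices_per_die = die_area_mm2 * 1e12 / device_area_nm2
switch_hz = 1e10                          # assumed top switching speed

chip_events = devices_per_die * switch_hz

# Crude biological reference point: ~1e11 neurons, ~1e4 synapses each,
# firing on the order of 100 Hz. Note the comparison is lopsided: a
# synaptic event is an adaptive analog operation, not a bit flip.
brain_events = 1e11 * 1e4 * 1e2

print(f"chip ceiling : {chip_events:.1e} raw switching events/s")
print(f"brain (crude): {brain_events:.1e} synaptic events/s")
```

The raw switching count alone looks flattering for the chip; the argument in the text is that the per-event work (adaptive, massively connected, analog) is what the 2d substrate cannot match.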

> more rapidly than our understanding of the brain. I expect that by the time

Au contraire. Neuroscience, augmented by synergistic insights from a
large number of other sciences, grows by leaps and bounds. Subjectively,
it is one of the most dynamic sciences I am aware of. Computational
physics is a stagnant pool in comparison to neurosci.

> we develop a theoretical understanding of the nonlinear computational
> processes evolved within large unsupervised feedback nets, the hardware
> basis for cognitive prostheses will likely be in place.

But it won't be based on silicon semiconductor technology -- and there
isn't any other (unless I failed to notice strong nanotech having become
available already). We'll need drastically different hardware to
interface to the brain constructively.

> Or perhaps a more rhetorically effective way to put that: I don't see much
> reason to think that cognitive science (which is currently stuck in a
> technologically fruitful zone which has very little to do with developing a
> theoretical understanding of the way the human brain actually computes)

But don't you see, it is all in the details! If you know the physics
and have an exhaustive knowledge of the structures, the insights you gain
are limited solely by the available modeling power. Once you know how the
thing works at an abstract level, you can build a surrogate, an
ersatz neuron.

There cannot be any theoretical understanding -- unless the theory
contains lots of messy wet data, which automatically disqualifies it as a
good theory, or even an applicable one. Then it's just a software
package, not a theory.

> will *overtake* the engineering of fully-parallel computing hardware, which
> is currently rocketing along at a merry pace.

Apart from quantum dot 3d arrays, which are but a lab curiosity (no
arrays yet, just dots), I have failed to notice the advent of maspar
machines in the laboratory or the 'real' world. Come to think of it,
especially in the real world. Alas, one cannot teach old
programmers new tricks. Remember what happened to Danny Hillis and Thinking
Machines? (I know they are alive again/still).

> And we're going to need both these pieces of the puzzle in place before we
> can start doing serious work on soft incremental uploads.

We'll need lots of computational horsepower, agreed -- to make sense of
the wet data.

> [ Mathematica neuroemplant ]
> But I suspect we'll need an understanding of the computational processes in
> the brain before we can start hooking any *new* functionality into it, and

All you need to utilize a computer algebra package is a stream of
chars. Low input bandwidth. To make sense of the data, visual input is
sufficient. It's high bandwidth, but it's just rendering. Interfacing a
computer algebra package in this way is almost trivial. It's just one
step after VR. Generating input vectors at will is the only way to make
VR fly.
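The bandwidth asymmetry is easy to put numbers on. A rough sketch, where the typing rate, frame size, and frame rate are all assumed figures for illustration:

```python
# Character-stream input vs. rendered output: two channels of wildly
# different bandwidth. All figures are illustrative assumptions.

chars_per_sec = 10                        # assumed typing speed
input_bps = chars_per_sec * 8             # ~80 bit/s of keystrokes

width, height, depth, fps = 1024, 768, 24, 25
output_bps = width * height * depth * fps # rendered pixel stream, bit/s

print(f"input : {input_bps} bit/s")
print(f"output: {output_bps / 1e9:.2f} Gbit/s")
print(f"ratio : ~{output_bps // input_bps:,}x")
```

Millions of times more bits flow out than in, yet the outbound channel is "just rendering" -- no understanding of the brain is required to drive it.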

> once we have that understanding, we'll probably use it to build some of the
> new functionality. Granted that this understanding is nowhere in sight at
> the moment.

Hooking low-bandwidth motorics and sensorics into the brain is
comparatively easy. We can do the motorics even now. Sensorics is much
harder, but eventually we will conquer it. This has nothing to do with
soft uploads; it is mere interfacing.

> >If this fine-grain incremental upload is infeasible, merely beckoning to
> >the surviving set of strange attractors constituting your personality
> >with virgin circuitry will depend on their evolution-shaped propensity to
> >cross over into new territory. Since this never happens in real life (though
> >lesion tolerance & recovery is based on semi-dormant, redundant circuitry
> >taking over the part of vanquished one), it is not likely to work.
> Strange. Your parenthesis, which I fully agree with, seems to completely
> contradict your main statement. I don't understand why you don't think that

I like self-ref contradictions. (No, I don't, actually).

> "semi-dormant, redundant" silicon circuitry could not similarly be
> automatically brought into play by the brain as its original tissues fail.

It's not distributed; it is a solid macroscopic lump with a sharp boundary.
To insert it, you must carve out a complementary block of tissue. The brain
is somewhat holographic, but there are limits to that. Brain salami
uploading? Uh, thanks, I think it had better be Dewar time.

What's easy for native neurotissue is damn hard for us. Hardware is the
bottleneck, again.

> Of course we don't yet understand the processes by which the brain replaces
> damaged tissues by bringing new synapses and neurons into play, retraining
> them to do the lost functionality, but we certainly do know that it
> happens. That sounds like an existence proof to me.

Mobile 1 um nanoagents could probably manage it. With semiconductor
technology, never.

> >> "identity" has gradually moved "into" the silicon prostheses, because
> >
> > [ liquid identity ]
> I doubt that identity has any properties at all that we can firmly identify
> at this point. So I counter by saying that I doubt that identity has the

Aw, you know how I meant it. Spatiotemporal activity is carried out by
circuitry, and this circuitry is localized. Cut a wedge from the
circuitry and the hologram grows fuzzier (if you're lucky). The missing
piece of cake has to be substituted. We don't know the constitution of
the slice we discarded, so we have to rely on the surrogate being
virtually indistinguishable from a chunk of neural circuitry. On _all_
functional scales, whether chemical or electrical activity, or whatever
else. Easy, eh?

> properties of a solid, either. Some facts of the universe are just facts of
> the universe, and one of them is that computational processes in a neural
> network are generally distributed throughout the whole network, and that
> neural networks, as they learn, build computational structures within
> whatever hardware they're connected to in the right way. Evolution just

One can't separate the network from the hardware. The hardware defines
the properties of the network. If it prevents the mind from spilling over
into fresh circuitry, there is nothing you can do to goad it into doing so.

Some NNs do that, yes, if the attached hardware (which defines the
properties of the NN) is the right one.

> happened to hit upon the trick of neural networks as a solution for
> computational problems faced by animals. That solution may have plenty of

Not only by animals -- by any entity which has to operate in the real
world. There's a tight limit to what a sequential machine can do (fuck
Turing), and maspar machines converge towards NNs with increasing
parallelism, becoming indistinguishable from them at the far end.

> properties that aren't selected for, if those properties are inseparable
> from the properties that *are* selected for (like being able to make fast

I dunno, I think the fitness function is additive (it's a mental
picture, ok?). One factor demands fast, robust, power-saving
complex calculations, the other tolerance of lesions. Both
are unrelated. Since fitness selects for both traits, we have both
competency slots covered. (There are lots of others.)
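The "mental picture" of an additive fitness function can be made concrete with a toy sketch; the trait names and weights below are invented purely for illustration:

```python
# Toy additive fitness: unrelated traits contribute independent terms,
# so selection on the sum covers every competency slot at once.
# Weights and trait values are illustrative assumptions.

def fitness(speed, lesion_tolerance, power_economy):
    """Additive fitness over unrelated traits."""
    return 0.5 * speed + 0.3 * lesion_tolerance + 0.2 * power_economy

# Two phenotypes strong in different slots can score identically, so
# selection keeps pressure on each slot independently of the others:
fast_but_fragile = fitness(speed=0.9, lesion_tolerance=0.1, power_economy=0.5)
slow_but_robust = fitness(speed=0.5, lesion_tolerance=0.9, power_economy=0.3)
```

Because each term enters the sum separately, neither computational speed nor lesion tolerance can be traded away without a fitness cost -- which is the point being made above.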

> learned nonlinear transformations of complex time-varying stimuli into
> complex time-varying motor activity, and learning those transformations
> without supervision).

Yeah, but I doubt this includes lesion recovery. Lesion tolerance, yes,
but not lesion recovery. That's an extra.

> >Alas, I have grown to distrust this Proteus guy (... you never know who he'll
> >be next ;). Most neuroscientists I know are highly sceptical about
> >unbound malleability of cerebral circuitry.
> Cognitive science is a highly interdisciplinary field, and most
> psychologists I know who understand anything about computation are firmly
> convinced of the plasticity of mind. I said nothing about "unbound". It's

Well, you demand a set of pretty advanced properties. The epithet
'unbound' is hardly inappropriate, methinks.

> important to distinguish between computational processes and the
> instantiating hardware, even though this is devilishly difficult to do in
> the case of the human brain, just as it is with the genetic apparatus.

I would be mad to equate circuitry with its function. If I did, the whole
idea of uploads, whether hard or soft, whether continuous or discontinuous,
would become laughable. Nevertheless, the implementation
of the demanded function shapes the outline of the hardware which has to
deliver this performance. (Connectivity, and such.)

> >If incremental soft uploads are
> >possible at all, they won't come before Singularity -- and that will be too
> >late. Contrary to Vinge's headbands, I doubt we'll see much of it prior to
> >Singularity, when every single rule is likely to be broken -- including
> >the necessity to rely on headbands (or heads).
> I don't understand this kind of thinking at all. Clearly, whatever
> technologies we seek to rely on to get what we want are going to have to
> evolve gradually. They are not going to magically appear out of an unfunded

Yes, but consider the speed of the evolution at the far end of the
exponential function. We ain't seen nothing yet. The biggest progress
will come from IT & network connectivity. This means vastly decreased
latency and boosted productivity. Add a thimbleful of agents to automate
the menial tasks, and, voila! look, no hands, mom, the Singularity is here.

> void. There are strong incentives for the development of cognitive
> prostheses; on the other hand, there's practically *no* economic pressure

Demand there might be (I would buy such a thing if I could afford it);
it's just that physics/biology seems to forbid it.

> for the development of "hard, sudden uploads"... in fact, I can't see any
> pressure for this at all outside of the cryonics community. But of course I

How so? There is both a strong push for scanning the shape and
connectivity, coming from the discipline of microscopy, an area of pure
science, and a strong demand for building autonomous insectoid
agents from industry and the military. E.g. scanning the visual system of
Musca domestica, and then building the hardware to make it fly, would
give us the deadly pilotless fighter the Pentagon is currently trying to
develop.

Cryonics does not count at all; it is not nearly influential enough. Too
outlandish a concept for the majority, alas.

> want to see research move forward on every front, and I certainly *do* want
> to see those patients in suspension restored to health.

Yeah, so do I.

> Eric Watt Forste <>

P.S. Btw, I spoke with an M.D. who is currently doing his Ph.D. in MRI
imaging. He showed me a functional MRI image of a 19-year-old male
who had suffered a brain hemorrhage at birth. There is lots of
compensatory activity, but despite all the years in between, his
hand movements are still inhibited. Our hardware is amazingly
flexible, yet there are limits to it.

| | cryonics, nanotechnology, |
| | >H transhumanism, [...] |
| | "deus ex machina, v.0.0.alpha"|
| | >H: "alpha-->omega" |