soft incremental uploads

Eric Watt Forste (arkuat@factory.net)
Wed, 14 Aug 1996 20:35:35 -0700


At 1:30 AM 8/14/96, Eugene Leitl wrote:
>Au contraire. Neuroscience, augmented by synergistic insights from a
>large number of other sciences, grows in leaps and bounds. Subjectively, it
>is one of the most dynamic sciences I am aware of. Computational physics
>is a stagnant pool in comparison to neurosci.

I'm glad you think so. Then perhaps you can answer what has become my
favorite question of late: has anyone managed to get artificial
unsupervised feedback networks to do *anything* interesting?

I've only just started poking around, but it seems to me that other than a
few abortive efforts in the late '80s when neural-net research woke up
again, all the work going on has been in supervised nets and feedforward
nets, and that no one has the foggiest idea how unsupervised feedback nets
do their stuff. (I know you know this, Eugene, but for those who are
following this and don't already know, our best current
computational-process-model of the brain is the unsupervised feedback net
model.)
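
To make the distinction concrete for anyone not already steeped in this: a
feedforward supervised net learns by being shown labeled examples and having
its errors pushed back through the layers, while an unsupervised feedback net
has no teacher at all and its outputs are fed back in as inputs until the
state settles. Here is a toy Hopfield-style sketch of the latter, entirely my
own illustration with made-up patterns, not anything drawn from a research
program:

import numpy as np

def store(patterns):
    # Hebbian outer-product rule: no teacher, no error signal, just correlations.
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0.0)
    return W / len(patterns)

def recall(W, state, steps=20):
    # Feedback: the whole state is pushed back through the weights each step
    # until it settles into a stored attractor.
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

patterns = np.array([[ 1, -1,  1, -1,  1, -1],
                     [ 1,  1,  1, -1, -1, -1]], dtype=float)
W = store(patterns)
noisy = np.array([ 1, -1,  1, -1,  1,  1], dtype=float)  # first pattern, last bit flipped
print(recall(W, noisy))  # settles back onto the first stored pattern

The unsolved part is getting nets like this to do anything richer than
pattern cleanup, which is more or less the question I'm asking.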

Last time I checked, progress was pretty stagnant even with supervised
feedback nets.

So it seems to me that this is a gaping chasm before us in k-space (or
World 3 or whatever you want to call it) and no one is currently working on
bridging it.
They're working *toward* bridging it, by investigating simpler nets and
fuzzy systems, but not yet *on* bridging it. And as far as I know, the best
mathematical models we have for unsupervised feedback nets are ragingly
nonlinear and not useful for much. Please correct me if I'm wrong.
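
By "ragingly nonlinear" I mean something like the following toy rate model
(again, my own back-of-the-envelope sketch, with made-up gains and step
sizes): every unit feeds back through the whole weight matrix on every step,
and past a modest gain the trajectory never settles into anything you can
write down in closed form.

import numpy as np

def simulate(n=100, g=1.5, dt=0.05, steps=4000, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))  # random recurrent weights
    x = rng.normal(0.0, 1.0, size=n)                     # arbitrary starting state
    trace = []
    for _ in range(steps):
        # dx/dt = -x + g * W * tanh(x): every unit feeds back into every other
        x = x + dt * (-x + g * (W @ np.tanh(x)))
        trace.append(x[0])
    return np.array(trace)

print(simulate(g=0.5)[-5:])  # low gain: the state just decays to a fixed point
print(simulate(g=1.5)[-5:])  # higher gain: it keeps wandering, no closed form

Fixed points you can analyze; the wandering regime you mostly can't, which is
why the math we have isn't useful for much yet.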

>But don't you see, it is all in the details! If you know both the physics
>and have an exhaustive knowledge of the structures, the insights you gain
>are solely limited by the available modeling power. If you know how the
>thing works at an abstract level, then you can build a surrogate, an
>ersatz neuron.

And then simulate 100 billion of them at minuscule tolerances? Sure, maybe
brute force will work. But the history of classical AI is a history of the
failure of brute-force approaches (along with some stunningly fabulous
spinoffs, I must add). My hunch (and it's nothing more than that) is that
we will succeed neither in AI nor in uploading nor in developing
"superintelligence" by some other means until we have a theoretical
understanding of the computational process that we call mind. And while the
progress that has been made in classical AI has led to an explosion of useful
technology, it hasn't really gotten us any closer to seeing how to cross the
aforementioned gaping chasm. Again, please correct me if I'm wrong.
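
For the brute-force route, the arithmetic alone is sobering. Every number
below is a round guess on my part (neuron and synapse counts, update rates,
operations per synapse), not a measurement:

# Back-of-the-envelope only; every number here is a round guess.
neurons  = 1e11   # order of magnitude usually quoted for the human brain
synapses = 1e4    # synapses per neuron, give or take an order of magnitude
rate_hz  = 1e2    # update rate for a crude point-neuron model
flops    = 10     # floating-point operations per synapse per update

print("%.0e operations per second" % (neurons * synapses * rate_hz * flops))

Call it somewhere around 10^18 operations per second for even a crude
point-neuron model, before you tighten any tolerances, and the real thing is
presumably a good deal messier than point neurons.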

I understand if you disagree with me because you are confident in the future
possibilities of brute-force simulation, but if you disagree with me
about the path toward a theoretical understanding of what intelligence is,
I'd really like to hear about it.

And I long ago decided that since cryonics is just a time-transport
technology, indefinite human lifespan will require either uploading or
*really* drastic meddling with adult gene-expression. So if any of you out
there are disheartened about the prospects for uploading and also want
indefinite lifespans, I recommend studying molecular biology and embryology
and human genetics and all that.

>Apart from quantum dot 3d arrays, which is but a lab curiosity (no
>arrays yet, just dots), I have failed to notice the advent of maspar
>machines in the laboratory or the 'real' world. If one comes to
>think of it, especially in the real world. Alas, one cannot teach old
>programmers new tricks. Remember what happened to Danny Hillis and Thinking
>Machines? (I know they are alive again/still).

Sure. One of the reasons we can't program them is that we don't yet have
a useful mathematical model of the only existence proof of a
massively-parallel supercomputer, the brain. Some useful hardware advances
happen serendipitously, but I think in this case theory is going to have to
precede technology.

Eric Watt Forste <arkuat@pobox.com> http://www.c2.org/~arkuat/