Re: uploading

hal@finney.org
Sat, 26 Jun 1999 11:23:38 -0700

Freeman Craig Presson, <dhr@iname.com>, writes:
> $ ps -ef
> halfinn 0 10834 Jun 08 0:01 univgen
> root 6 5838 Jun 13 0:06 telnetd
> nanogrl 0 5838 Jun 25 0:00 -host8.wsfa.com: nanogrl: aminomodel -c
> nanogrl 6 5838 16:10:53 0:00 -host8.wsfa.com: nanogrl: vworld
> fcp 8 5838 Jun 22 0:00 -fc1-49.netup.cl: gnuchessv200.3
> halfinn 6 36456 Jun 13 0:00 -univgen
> fcp 6 5838 Jun 23 0:00 -159.235.8.45: mediatron -ch Remedial-physics
> fcp 8 5838 Jun 23 0:00 -159.235.8.45: mediatron -ch Vorgy
> root 6 5838 11:39:39 0:00 telnetd
> nanogrl 4 5838 Jun 24 0:00 -host9.wsfa.com: mediatron -ch Vorgy
> fcp 2 5838 Jun 23 0:00 -194.224.244.49: make -k univgen.cpp
> anders++ 2 5838 Jun 22 0:00 -cox.com: anders++: STOR univ3.2

I love it! I want to know what this Vorgy thing is that nanogrl is running, though...

> You said that right at the end -- our upload host will be the ultimate
> PERSONAL computer; we'll be gravely concerned with its security and
> reliability. We'll also want all the raw power we can get. Processors will be
> cheap, there will be processors everywhere, maybe one per
> neuron/synapse in the neural net part, and way more than we need to run
> the rest of it (assuming a hybrid machine, part NN and part symbolic).

That may be true in some circumstances, but it's not clear what the ultimate architecture will be. There may be tradeoffs or economic pressures which force us to be a little more parsimonious with our processors.

Even where we do have enough resources for an optimally secure OS, the main question remains whether philosophical considerations of how consciousness works will constrain the architectures we adopt. I have a story (which I've told before) about an interesting architecture which illustrates this point.

Years ago I worked in the parallel computing business, making hypercube supercomputers. I was in charge of the OS. One of the groups we were working with was at JPL, and they had their own OS design, which they called Time Warp.

Time Warp ran on a parallel processor that was designed to simulate systems with mostly local interactions but occasional distant effects. I think their contract was something related to Star Wars missile defense.

Parallel processor systems work well with local interactions, but when there is a need for global effects they slow way down. After each step of the calculation, every processor has to stop and wait for any messages that may be coming in from distant processors in the network before it can go on to the next step. Most of the time there are no such messages, but the processors have to wait anyway. The system ends up running very inefficiently, and it gets worse as the network grows.
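Here is a toy sketch of that conservative scheme (Python, with names of my own invention; the barrier stands in for the end-of-step wait for remote messages):

    import threading

    N_PROCS, STEPS = 4, 5
    barrier = threading.Barrier(N_PROCS)

    def processor(pid):
        state = 0
        for step in range(STEPS):
            state += 1        # local work for this time step
            # Conservative synchronization: every processor blocks here
            # at the end of every step, in case a remote message is in
            # flight -- even though most of the time there is none.
            barrier.wait()
        print("processor", pid, "done, state", state)

    threads = [threading.Thread(target=processor, args=(i,))
               for i in range(N_PROCS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()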

The idea of Time Warp was that the processors wouldn't wait. They used a technique called "optimistic execution with rollback": each processor proceeds with its calculations on the assumption that no messages from distant processors are coming. That assumption is usually correct, so the processors run very quickly.

The problem, of course, is that when a message does arrive, it is too late. The processors have already gone on and calculated what would have happened had no such message existed.

For example, suppose the processor has calculated up to time step 2108, and here comes a message stamped with time step 2102. We were supposed to handle it then, but we have gone too far. What we do is roll back the processor state to the previous checkpoint, which may have been, say, 2100. From that point the processor can run forward to 2102, handle the incoming message, and go on from there. The earlier run from 2102 to 2108 is discarded and has no effect on the rest of the simulation.

There is some wasted work here, but as long as messages from remote processors are infrequent, you come out ahead: the system as a whole runs faster.
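To make the rollback concrete, here is a toy, single-processor sketch (Python, names mine), using the same numbers as above: checkpoints every 100 steps, and a straggler stamped 2102 arriving when the processor has reached 2108. A real Time Warp also has to cancel any messages the discarded run sent out to other processors, which it does with "anti-messages"; that part is omitted here.

    CHECKPOINT_INTERVAL = 100

    class Processor:
        def __init__(self):
            self.time = 0
            self.state = 0
            self.checkpoints = {0: 0}            # time step -> saved state

        def step(self):
            self.state += 1                      # stand-in for real work
            self.time += 1
            if self.time % CHECKPOINT_INTERVAL == 0:
                self.checkpoints[self.time] = self.state

        def receive(self, msg_time, msg_value):
            if msg_time < self.time:
                # Straggler: roll back to the last checkpoint at or
                # before the message's timestamp (2108 -> 2100) ...
                restore = max(t for t in self.checkpoints if t <= msg_time)
                self.time = restore
                self.state = self.checkpoints[restore]
                # ... then re-execute forward to the timestamp (2100 -> 2102).
                # (A fuller version would also discard any checkpoints
                # saved during the undone run.)
                while self.time < msg_time:
                    self.step()
            self.state += msg_value              # handle the message in order

    p = Processor()
    while p.time < 2108:
        p.step()                                 # optimistic: assume no messages
    p.receive(2102, 5)                           # late message stamped 2102
    print(p.time, p.state)                       # back at 2102; the run to
                                                 # 2108 has been discarded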

Now, this architecture would not be a bad choice for simulating the brain. Most connections in the brain are local, but some neurons have long-range projections. You might do very well to run something like Time Warp and roll back the local state whenever a message comes in from a distant part of the brain.

The interesting question is what effects this might have on consciousness. We have the "main path" of the brain calculation constantly moving forward, but at the same time there are a number of "side branches" where segments of the brain run off and do calculations that later turn out to have been mistaken (as in the run from 2102 to 2108 above). Would these side branches cause momentary bits of consciousness? Would these conscious threads then be erased when we do the rollback? And would we be aware of these effects if we were run on a Time Warp architecture?

In some sense we can argue that there would be no perceptible effects. Certainly no one would be able to say anything about it (we would wait for any late messages to arrive before actually doing any output, so we would never undo anything we started to say). So it would seem that there must be a conscious entity which is unaware of any effects caused by Time Warp.
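This is in fact how Time Warp itself handles irreversible actions: output is buffered and only committed once "global virtual time" -- the oldest timestamp any straggling message could still carry -- has passed it, at which point no rollback can ever reach it. A toy sketch of the rule (Python, names mine):

    pending = []                         # (timestamp, text), not yet printed

    def say(t, text):
        pending.append((t, text))        # buffered; a rollback can still undo it

    def commit(gvt):
        # gvt = global virtual time: no straggler older than this can
        # still arrive, so output stamped before it is safe forever.
        global pending
        safe = sorted(x for x in pending if x[0] < gvt)
        pending = [x for x in pending if x[0] >= gvt]
        for t, text in safe:
            print(t, text)               # irreversible, performed only now

    say(2105, "hello")
    commit(2102)                         # prints nothing; 2105 still at risk
    commit(2110)                         # now safe: prints "2105 hello"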

On the other hand, maybe there are additional entities which have only a transient existence and which are constantly being snuffed out. The "main path" consciousness would not be aware of these, but they might be said to be real nonetheless. In effect, we could be creating and killing thousands of variants of ourselves every second.

I think this is one performance optimization which even some hard-nosed computationalists would hesitate to embrace. On the other hand, if it turns out to be more efficient, there may be pressure to use it. And after all, nobody who is run this way ever reports anything untoward. It is another example of how philosophy will collide with practicality once these technologies become possible.

Hal