Anders Sandberg writes:
> Running an uploaded mind over a net is trickier; as anybody trying
> to program distributed applications knows, errors occur all the time,
> different parts may run at different speeds and so on. But it is
> also more resilient, you don't need to have a lot of resources in
> one place and it is probably cheaper than having a big computer
> somewhere. So in time uploads might be able to run on the net,
> like the Copies in Greg Egan's seminal _Permutation City_ (IMHO
> the Big Uploading Novel to date).
This brings to mind an interesting thought. Years ago I worked in the
parallel processing business doing operating system design. The advantage
of parallel processing is of course that you have a lot of processors working
on the problem. The disadvantage is that sometimes processors have to
wait for work to be communicated from other processors before they can
proceed. With certain kinds of simulations, a processor cannot know in
advance whether information from the other processors will arrive in a
time frame that influences its own work. The usual
solution is to establish a global clock and make all processors run
synchronously, so that they don't go on to time t+1 until everybody has
done all the work for time t.
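The lockstep approach can be sketched in a few lines. This is a toy illustration, not any particular parallel OS: each worker does its step-t work, then blocks at a barrier so that nobody begins step t+1 until every worker has finished step t. The worker function and its per-step "work" are invented for the example.

```python
import threading

NUM_WORKERS = 3
STEPS = 4
barrier = threading.Barrier(NUM_WORKERS)
results = {}

def worker(wid):
    state = 0
    for t in range(STEPS):
        state += wid + 1   # stand-in for the real work at time t
        barrier.wait()     # nobody proceeds to t+1 until all finish t
    results[wid] = state

threads = [threading.Thread(target=worker, args=(w,)) for w in range(NUM_WORKERS)]
for th in threads:
    th.start()
for th in threads:
    th.join()
```

The cost of this scheme is exactly the waiting described above: the fastest processor idles at the barrier until the slowest one catches up.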
There was a research project on the "Time Warp OS". In this system, each
processor plowed ahead without waiting to see whether other processors
were going to send messages. Basically it assumed that there were not
going to be messages from the other processors, and it did the local
simulation on that basis. If a message then arrived which should have
been processed in the past, the local OS rolled the processor state
back to what it had been at the time that message needed to get handled.
It then handled the message and proceeded forward. Most of the technical
work in the OS was to have efficient ways to checkpoint and roll back
in this way.
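The checkpoint-and-rollback idea can be sketched as follows. The class and method names here are illustrative, not the actual Time Warp OS interface: the processor advances optimistically, saving a checkpoint at every virtual time step, and a "straggler" message timestamped in its past forces a rollback to the saved state at that time.

```python
class OptimisticProcessor:
    def __init__(self):
        self.state = 0
        self.now = 0
        self.checkpoints = {0: 0}   # virtual time -> saved state

    def step(self):
        # Optimistic work: assume no message will arrive for this step.
        self.state += 1             # stand-in for the real local work
        self.now += 1
        self.checkpoints[self.now] = self.state

    def receive(self, timestamp, value):
        if timestamp < self.now:
            # Straggler: restore the checkpoint at `timestamp` and
            # discard the optimistically computed states after it.
            self.state = self.checkpoints[timestamp]
            for t in list(self.checkpoints):
                if t > timestamp:
                    del self.checkpoints[t]
            self.now = timestamp
        # Handle the message; the caller then re-executes the lost steps.
        self.state += value
```

For example, after five optimistic steps a message timestamped at time 2 rolls the processor back to its time-2 state before the message is applied; the real system then re-executes the discarded steps, sending out corrections for any messages they had emitted.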
You could envision a neural simulation on a Time Warp machine. It would
assume some default pattern of incoming impulses into the part of the
brain being simulated by some processor. Then, if the actual data which
arrived was different from what had been assumed, it would roll back its
state and handle the data as needed. If it turned out that the default
model for incoming data was reasonably accurate much of the time, this
could be a very efficient way to simulate the brain on a network.
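A toy sketch of this, with an invented update rule and a made-up default input: the simulator optimistically advances each region-step assuming the default impulse pattern, and when the actual input turns out to differ it recomputes that step from the checkpointed state, discarding the speculative state it had computed.

```python
DEFAULT_INPUT = 1   # assumed impulse pattern (invented for the sketch)

def simulate(initial_state, actual_inputs):
    state = initial_state
    history = [state]        # checkpoint of the state at each time step
    discarded = []           # speculative states later rolled back
    for t, actual in enumerate(actual_inputs):
        # Optimistically advance using the default input.
        predicted = history[t] + DEFAULT_INPUT
        if actual == DEFAULT_INPUT:
            state = predicted                  # the guess held; keep going
        else:
            discarded.append(predicted)        # "path not taken", erased
            state = history[t] + actual        # redo the step with real data
        history.append(state)
    return state, discarded
```

The `discarded` list collects exactly the transient states discussed below: states that were genuinely computed but appear nowhere in the final trajectory of the simulation.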
The interesting question this raises is whether there would be any
conscious perception by the simulated brain of the "paths not taken".
Frequently, part of the brain moves forward in time assuming some
pattern of outside impulses, only to find out that the actual pattern
was different from what was assumed, so that it resets its state to what
it was in the past and proceeds forward to process the actual data. So
there are transient brain states being computed which do not correspond
to the actual brain states which the simulation eventually produces.
We assume that the (simulated) neural activity is what gives rise to
consciousness. Therefore it might be that the simulated "wrong" neural
activity could give rise to a consciousness which does not match the
"true" consciousness being determined by the simulation. This false
consciousness would have no way of interacting with the outside world, or
of revealing its existence through any overt behavior. The person being
simulated would steadfastly deny that he had any unusual perceptions
or sensations due to the time warp simulation (because all the false
ones would get rolled back and erased). Yet we have reason to think
that he is wrong, by the principle which identifies mental states with
(simulated) brain states.
This could be thought of as a case where behavior cannot reveal certain
mental attributes.
Hal