David Bradley
Tue, 05 May 1998 23:49:01 -0400

This is my first real message to the list (ignoring that mail I thought I had canceled). I have been a lifelong extropian, but had not followed organized extropianism, specifically these transhuman and extropian lists and the pages directly related to them, until somewhat recently.

I have only been following these lists for several months, and I am sure that you must have discussed these issues multiple times, but I have yet to be a direct observer of such a discussion. So now I will share my thoughts, doubts, and ideas on the subject for anyone who wishes to respond. Pointers to any previous discussions I could search for in the archives (i.e., specific thread topics) would also be helpful.

I have, however, seen references to certain aspects of uploading and other related topics in a number of discussions, and I am curious about the list's popular opinion on certain aspects of the process, given the current knowledge/speculation we have of it.

Since I have had no first-hand experimental experience with how consciousness works in the brain or how it may be uploaded, I have only been able to use logic, throughout my years as a lone extropian, to analyze the question. So I have settled my mind on a few logical variables. The most important, to me at least, is whether a consciousness or 'self' will be *passed* in the process, as opposed to a *new* one simply being added. Even though I like the idea of my memories not being wasted, I'd much rather be there to utilize them myself.

So, if you will, allow me to analyze a few methods I've seen, or thought of myself (not necessarily before anyone else did, though), for uploading.

All of these assume that: 1) consciousness exists; 2) the brain is the carrier/supplier of consciousness; 3) the brain does so through chemical and/or electrical means; and 4) we have the ability to somehow manipulate this process to our benefit.

If I am wrong at all in my reasoning, please correct me. That is the first reason for doing this: to further my knowledge. Beyond that, I am curious how others logically back up the processes I see as improbable or impossible given my current information.

Let me define up front what I will be *using* the term "original consciousness" (or "o.c.") for. First, I am not at all implying that this is, or should be, the accepted definition of "consciousness," 'original' or otherwise. This is just *my* way of saying "the mind's 'I'," as it is popularly called: the part of the mind that 'hears' the songs 'in my head'; the part that can recite poetry in an unspoken voice; the part that people usually consider 'gone' when death comes (whether to another plane of existence, to nothingness, or otherwise is irrelevant); the part of the mind that enjoys things; the part that is probably wishing I'd just finish this definition and continue the document. :) The "original" in the term refers to the consciousness we all have right now (assuming none of us has already been uploaded?). A second or nth consciousness would be whichever consciousness can be called 'mine' after the first is lost, either by death or purposeful upload.

'Total consciousness' (or 't.c.') is the same as above, except that it stands for all the consciousnesses a person will ever have, summed together. In other words, every time they awake and think, it is still a part of the same total consciousness, even if the first "I" (the original consciousness) died with the original body.

(Are there more accepted terms for these two states?)

Also, for a few examples, I am comparing multiple copies of people (i.e., genes, memories, personality, and the other aspects important to being 'them'). If two people have differing memories after a specified point in time (i.e., when the copy or copies were made), that is irrelevant to my comparison (except for one small part, which is itself irrelevant). If two bodies share the same original consciousness, they are, for the purposes of this discussion, the 'same.' For example: my original consciousness is gone once my total consciousness has been uploaded into a new body. That new thing is not the 'same' as me, since "I" am now dead, even though it has my personality, memories, etc. "I" am no longer in control of it, or aware of its presence. However, if "I" can control it, and am aware of it, it is still me.

With this in mind, the questions of the o.c. on its transhuman journey now begin:

First, physical backup, using either true cells or mechanical devices of some kind. *If the consciousness is both contained in, and produced by, the cells of the brain* (and is the reason you die if you do not get enough oxygen to the brain), one should be able to *logically* assume that:

  1. If the cells are all taken away and the new system is implanted, the original consciousness *is* lost. There should be no more reason for the original consciousness to exist than there would be for a person 'a' to still be living in an apartment after 'a' has been replaced by person 'b'. The apartment may still be a home (or the body may still have a consciousness), but there is a new 'tenant.'
  2. If the cells are replaced one by one (or slice by slice), the o.c. *may be* lost. Either the individual will sit there and slowly die, or the individual will sit there and notice nothing (perhaps some odd, temporary effects, but nothing serious). (Just for completeness, a last possibility would be that the process interferes too much with the functioning of the brain to be a viable form of uploading, but that contradicts assumption 4.) If the former is true, it should hopefully be found out with the first person the process is performed on. If he says, 'Whoa, I remember dying!' then that's probably a hint. :) That is, unless the whole thing is a blur, and no one finds out that they died until they try it... Sounds like a bad sci-fi plot. :) The latter way should work. To expound a bit: the only way this would not work is if consciousness could not expand to fit all the cells in a brain, which seems unreasonable, especially in light of studies such as the one described at this URL: I am sure that other studies have been made in this area that would support this assumption. The only possible hurdle would be if genetically enhanced or mechanical cells were used and they were 'rejected.' However, there seems to be no logical reason (though I am not knowledgeable enough in biology to have any sort of authority on the subject) why the new cells, or, more easily, nanomachines couldn't, through certain chemical/electrical functions, 'fool' the original cells into thinking that they are normal, correct? Still, I suppose that since the body isn't being replaced in this case, it is a temporary solution. So let's move on to more realistic solutions.
  3. If the consciousness is uploaded into a computer before it is uploaded into a totally new body. This would not work, using the specific assumption of this section. If the consciousness arises because of the actions of the brain's cells, and is then contained within them, there is no reason why it should live in a computer. A computer is, after all, just a processor of information, at least until some drastic design change is made. Of course, after that it would no longer be a 'computer' in my, and I assume many others', definition. Anyway, anything an electronic 'computer' can do, a human 'computer' also can, with a good old sheet of paper (or a number of sheets), just much, much slower. Now, if consciousness can be understood to the point of its being used in a manner such as uploading, it should be able to be computed. Even if not, let us say for the sake of argument that it one day becomes so. The man and the computer would both work out the same responses to stimuli, as per that specific consciousness's personality and other factors. So the consciousness would appear to exist, but would not necessarily exist. Perhaps this could become an addition to Nietzsche's "Baum der Erkenntnis?" ;)
  4. Skipping the middleman, and using the method put forth in 2. If, for our example, a mechanized brain were used, and an enhanced brain were connected to the existing one, could the consciousness 'transfer' to the new brain as if it were a part of the old, even if only a part at a time? For example: the new brain needs only wires to connect it to the new body. Nanotech machines are released along the wires and build the new brain up from them (the wires being inserted rather unobtrusively into the true brain), replacing the original through the cranium until the new brain fully takes over.
  5. Now, if 4 could happen, it's not stretching the concept too much to say that one wire ('a') could release machines building one device while the other wire ('b') was constructing another. When they meet in the middle, both sets of machines are removed; 'a's configuration is added to 'b's, and vice versa. Hopefully the nanomachines would also be able to send this sort of information (configuration, chemical behavior, etc.) to a computer or other storage medium for backup, but immediately, for the other side to use in place of the original cells to complete its brain.
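The gradual-replacement argument in 2 and 4 can be put in a toy sketch. Everything here is invented for illustration: the "cells" are trivial weighted functions and the "nanomachines" are stand-ins that merely copy a cell's input/output behavior. The sketch only demonstrates the functionalist assumption, namely that if each individual replacement preserves a cell's behavior, the whole brain's behavior is preserved at every intermediate step of the swap:

```python
# Toy model of gradual, cell-by-cell replacement (items 2 and 4 above).
# "Cell" and "NanoCell" are hypothetical stand-ins, not real biology.

class Cell:
    def __init__(self, weight):
        self.weight = weight

    def fire(self, signal):
        return self.weight * signal

class NanoCell:
    """A 'nanomachine' that duplicates the original cell's behavior exactly."""
    def __init__(self, original):
        self.weight = original.weight

    def fire(self, signal):
        return self.weight * signal

def brain_output(cells, signal):
    # The "brain" here is just the summed response of all its cells.
    return sum(c.fire(signal) for c in cells)

brain = [Cell(w) for w in (0.5, 1.0, -0.25, 2.0)]
baseline = brain_output(brain, 3.0)

# Replace one cell at a time; behavior never changes at any step.
for i in range(len(brain)):
    brain[i] = NanoCell(brain[i])
    assert brain_output(brain, 3.0) == baseline

print(all(isinstance(c, NanoCell) for c in brain))  # every cell replaced
print(brain_output(brain, 3.0) == baseline)         # behavior preserved
```

Of course, the one step the sketch cannot settle is exactly the one at issue in 2: whether the o.c. rides along with the preserved behavior, or quietly dies while a behavioral twin takes over.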

All said and done, two perfect copies would be made, and assuming that all the other assumptions made along the way were true, both should have the original consciousness. How this could actually work is another variable matter... Both could be new consciousnesses (seems unlikely); one could be the original consciousness and one new (but what would decide? that also seems unlikely); the consciousness might simply not handle the process and would behave unpredictably when it tried to expand to cover both brains (covered later); or, the solution I find most logical (even if not very natural-seeming): the original consciousness would control both brains. If the consciousness expands to include the machines as they take over, the process should continue until both devices meet at the corpus callosum, where the o.c. is still controlling the brain as a whole, even though it is now two devices communicating with each other just as the two hemispheres normally would. Here a computer backs up the information of both sides. The hemispheres are then separated (each hemisphere could be fed dumbed-down reactions that the computer creates from the other device's backup, or even from the other device itself, with wires connecting the two and an area of nanomachines used as 'connections' to the hemisphere, passing the signals just as they would normally be seen). Both brains could then be constructed from the device halves, consciousness filling in as each brain progresses toward completeness.

About the consciousness controlling two disconnected areas: I assume that at least some of you are familiar with Roger Sperry's experiments with severing patients' corpora callosa. If not, what happened was (don't quote me): patients who suffered from extreme seizures had their corpus callosum, the connection between the hemispheres of the brain, severed, in the hope that the hemisphere which fired during a seizure would not affect the entire brain. He had great success with the project, and much was learned about how the individual hemispheres process information. However, I remember a special I saw on TLC or The Discovery Channel, in which patients who underwent similar (or the same) procedures would exhibit conscious-seeming behavior outside the person's control. (A part that really sticks in my mind is when they asked a woman who had had the treatment to point to 'yes' or 'no' on a large card, in answer to questions. On some questions, no matter how sure 'she' was, one hand would point to 'no,' even going as far as pushing the other hand out of the way, and quite consciously doing as much as possible to point to 'no,' even covering up the 'yes.' Again, don't quote me on this, but that is what I saw.) Perhaps other forces were at work, or the cut was not performed properly, or any number of other things, but this worries me a bit. I believe this again had to do with the seizures, but I'm not sure. This is an area on which I was hoping someone had more information. Anyway, the corpus callosum isn't really the sole connection that holds the brain together, correct? (I'm not much of a brain guy; I do computers.) So it's not necessarily a true split. Still, I see nothing that prevents this from happening so far, and we are still operating under the assumption that the cells themselves are the creators and holders of consciousness. Something tells me that a reply will have already proved me wrong by now, but I will press on nonetheless...

This section deals with an area somewhat aside from the task at hand, and explores some variations that could possibly be made if the above is possible. The chain of reasoning will continue again soon (it will be marked as such). Now, if we can fool brain cells into thinking that a robot mock-up is 'one of them,' then we could possibly make odd changes in the design of this receiver. Perhaps we could change around some X and Y chromosomes and create an opposite-gender version of oneself... I don't know what kinds of laws, written or unwritten, will exist for this type of thing, but you know someone is going to try it. The o.c. controlling both a female and a male version of an individual would be... odd, to say the least. How would any offspring be set up? I would assume there would be the problems that inbreeding causes, or would it 'transcend' even that? That's a topic for thought. Either way, something tells me that experiencing the act from both sides at once would be socially ostracized, as masturbation is now. Besides that, it also adds the taboo of incest.

Off of that subject, though, is a serious topic of debate. If genetic rearrangement, à la the previous example, is possible, would it be permissible to augment one's appearance in an upload? Most people here would probably say yes. Some would probably say that it is required, at least if one wanted a certain level of advancement (Two legs? You must be kidding!). Consider also, though, that if this type of multiple consciousness is possible, then someone could also have multiple personalities. This is not good. Spies, hired killers, and a plethora of other such professions would be made much easier by just splitting your consciousness into a never-before-seen body, through genetic engineering or any other way of creating bodies different from the original. This new body could then kill whomever it chose, and even if it died, that would still leave the other portion(s) of the o.c., which could split again. The tech will probably be set up so that the typical hired killer cannot just 'split' at will, but it can still be easily misused. Also, it should not be banned entirely. If I cannot choose a body for upload, I should still have the option to try out both and get a feel for each, though this could perhaps happen by just trying one, then the other. Besides that, I would hate to have to deny everyone the ability to experience two lives at once. Could you imagine that? It would be amazing. Though I suppose that arranging the memories afterward would be a hard enough ordeal that one might just leave it as an 'interesting thing,' and then all those memories would be lost, so what's the point... I suppose that if regulations were to be made, then it would just be easier to disallow it. Total banning may not be necessary, though. Think about it: if someone is caught after a murder, or killed after a murder, you know with pretty much absolute certainty who did it. Their memories would show who they were, since they'd still remember their original life before upload.
If their memories were not uploaded (again, as shown in Roger Sperry's work, if the consciousness can't even reliably access material from the other hemisphere of a possibly still partially connected brain, then it certainly won't be able to from one being to the next), then the killer body would be nothing more than a personality clone, and the o.c. would probably experience much mental turmoil, being possibly 120 years old in one brain and literally newborn in the other. Even if it could somehow manage with total amnesia in one of its bodies, I believe that body wouldn't be enough of 'the original' to have an important connection; it would logically seem to cause a true multiple personality disorder. That leads into another interesting area. Since the memories of either body would slowly differ from the other's over time, wouldn't the same process eventually happen anyway? The older an individual is, the less any new memory changes what they have believed since age 10 or so. That is, between birth and age 10, every day could possibly trigger a totally new way of thinking about the world, but after age 15 or so, almost nothing will make you believe that a certain race really *isn't* inferior to you, or that, if you think about it, there's really nothing *wrong* with wanting to live forever. However, the possibility exists. Many people here probably had such an experience with Newtonian physics when relativity was introduced to them. It probably caused you to think totally differently about your world and how you perceive it. So, if one of my bodies were on a planet around Proxima Centauri, and the other on good old Earth, and the one around P.C. talked to a real live extraterrestrial, wouldn't that cause a severe clash in my mind's rendering of my bodies? The Earth-bound body would have no stored memory of the existence of those aliens, and therefore would not believe in them, while "I," the o.c., would know from my P.C. body that they exist.
The difference would be even more drastic for someone such as our killer, who didn't upload his memories. The new body might learn a lot about ethics as it gains the knowledge to survive, such as realizing that killing causes things not to survive, survival being the most important thing to this body now. The new body might become *opposed* to killing!

Although, I don't have to experience everything to know it. If I could somehow think in a certain brain at will, I could get those memories in there. Then again, how would either brain ever be able to do that? I suppose some more research on memory would be helpful, and it depends on how consciousness works. If I can realize something in one body, and rethink it in another, I can have the memory in both. This could also become a sort of collective consciousness, if more than one consciousness could inhabit a brain (as opposed to the other way around, as I've been discussing).
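The shared-past/divergent-future picture from the last few paragraphs, and the speculative "rethink it in the other body" transfer, can be put in a toy sketch. All of the names here (the event lists, the rethink step) are invented for illustration and imply nothing about how real memory works:

```python
# Toy model: two bodies share every memory up to the copy point, then
# diverge. "rethink" is the hypothetical act of deliberately re-forming
# a memory from one body in the other.

copy_point = ["childhood", "school", "upload day"]

earth_body = copy_point + ["saw the ocean"]
pc_body = copy_point + ["met an extraterrestrial"]

def shared_memories(a, b):
    """Longest common prefix: what both bodies still agree on."""
    shared = []
    for x, y in zip(a, b):
        if x != y:
            break
        shared.append(x)
    return shared

def rethink(source, target, memory):
    """Re-experience a memory from one body in the other (hypothetical)."""
    if memory in source and memory not in target:
        target.append(memory)

print(shared_memories(earth_body, pc_body))  # only the pre-copy past

rethink(pc_body, earth_body, "met an extraterrestrial")
print("met an extraterrestrial" in earth_body)  # the clash is repaired
```

The Earth body's disbelief in the aliens is just the gap between the two lists; a successful rethink closes it, but only for those memories the o.c. bothers to carry across.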

Now, back to reasoning the mind out of the brain... If the consciousness can be coaxed out of the brain into an electronic device made of nanomachines, then, technically, instead of hooking the nanomachines to a body, why not hook them to a computer? It should still work, since there is a place for the reactions and electrical functions to happen; it's just that all the stimuli come from computations in the computer... Here's the chance for artificial reality.

So, if these nanomachine cells can emulate the original cells, shouldn't that mean that consciousness isn't technically in the chemicals? Yes, this is pure speculation, but so is the second of our assumptions, that the cells are the cause/holder of consciousness; actually, all four are.

So, if the chemicals don't necessarily make the consciousness, then it's probably not the electricity either, since the nanomachines should be able to be started again with no problems. If it is the original spark that is the consciousness, then everything stops here: the machines are turned off, and when they return, it is a new consciousness, just another in the t.c., not the original consciousness. Now, if I remember right, there have been people who were 'brain dead' and came back, with the help of shocks and such. Does 'brain dead' entail all action in the brain ceasing? Does the natural electricity change at 'brain death'? These answers could help solve this part.

If all the above is true (working from the four original assumptions (which aren't, IMO, stretching it) and adding from there), then I have just proven the original second assumption wrong. (This is a mighty *big* if, though, and I'm almost sure that I am wrong.) However, if so, then there is nothing making me "me." I'm not sure if that's a soul or what. Think about this: if it is just the setup of my brain, when given a spark (or if the spark is naturally there from the mother), that causes 'me' to be, and if that *can be changed beforehand* by nanomachines that 'fool' the original cells into giving them the consciousness, what is stopping my consciousness from forming in your brain?

I think it all breaks down at the question of the 'individual' spark. Logically, from the information presented (which, unfortunately, is not all of it), that seems the only place where trouble could strike. I really need more info on the definition of brain death.

Well, I hope I get some replies (at least it will let you get your minds off of gun control for a minute or two). Maybe I even touched on something no one had ever thought of before. Maybe I blew the whole "consciousness is brain reactions" thing out of the water. Yeah, right.

Someone, please prove me wrong!!!

So, then, could random matter in the universe create another part of my consciousness?

Dave Bradley