Re: ETHICS: Mind abuse & technology development [was RE: New Government?]

Robert J. Bradbury (bradbury@www.aeiveos.com)
Sat, 2 Oct 1999 14:02:41 -0700 (PDT)

So, my voice output quote-of-the-day [wish I had one of these] says to me in its Clint Eastwood voice, "Are you feeling lucky?", to which I reply, "Not particularly, but I'm feeling pretty brave."

So, after a month, I'm going to trip over the ropes back into the ring with ObiWan Burch regarding mind ethics.

> On Sun, 29 Aug 1999 GBurch1@aol.com wrote:

> In response to my message dated 99-08-27 07:21:42 EDT
>

[snip, lots of stuff regarding mind types, definitions, etc.]

> A MORAL theory of mind (which seems to be what you're looking for) may be
> dimly perceived in this insight, as applied to the questions you've
> presented. As a first pass at formulating such a moral theory of mind,
> perhaps we can say that an entity should be treated as both a moral subject
> and a moral object TO THE EXTENT THAT it exhibits more or fewer of the
> various distinct elements of "mind". As an example, consider a book or
> computer hard drive as an instance of memory. Both are utterly passive
> repositories of information, incapable by themselves of processing any of
> the data they record.

Well, it depends what you mean by "incapable". A good disk drive today has a combination of hardware and firmware that handles ECC error correction, optimizes seek activity (using an elevator algorithm), and handles bad-block remapping. These are not "passive" in my opinion. You might respond that a human designed that stuff into them. I agree. Nature "designs" a lot of stuff into all organisms so that they operate in an appropriate fashion for their environment.
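
For the curious, here's a minimal sketch of the elevator idea in Python (purely illustrative -- real firmware is far more elaborate, and the function name and interface are mine): serve pending block requests in one sweep direction, then reverse, instead of passively honoring them in arrival order.

    def elevator_order(pending, head, direction=1):
        # Service outstanding block requests in one sweep direction,
        # then reverse, rather than in arrival order.
        pending = sorted(pending)
        ahead  = [b for b in pending if (b - head) * direction >= 0]
        behind = [b for b in pending if (b - head) * direction < 0]
        if direction > 0:
            return ahead + behind[::-1]    # sweep toward higher blocks, then back
        return ahead[::-1] + behind        # sweep toward lower blocks, then back

    # elevator_order([98, 183, 37, 122, 14, 124, 65, 67], head=53)
    # -> [65, 67, 98, 122, 124, 183, 37, 14]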

> Thus, a book or a current computer storage device exhibits
> only one aspect of mind (and that only dimly).

Maybe, but the complexity is increasing rapidly. The next step would be anticipating which blocks you will want, based on historical trends, and retrieving them before you request them. This is an attribute of a very efficient secretary.
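
Here's an equally crude sketch of that kind of anticipation (again Python, and again purely hypothetical -- the class and method names are my own invention): remember which block has historically followed which, and speculatively fetch the most common successor of whatever was just read.

    from collections import defaultdict, Counter

    class Prefetcher:
        def __init__(self):
            self.successors = defaultdict(Counter)  # block -> counts of what followed it
            self.last = None

        def record(self, block):
            # Note the observed access and which block preceded it.
            if self.last is not None:
                self.successors[self.last][block] += 1
            self.last = block

        def predict(self):
            # Guess the most frequent successor of the last block seen, if any.
            history = self.successors.get(self.last)
            if history:
                return history.most_common(1)[0][0]
            return None

    # Feed it the trace 5, 6, 5, 6, 5 and predict() returns 6.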

> Likewise, consider your PC's CPU: It is capable of processing data in
> a very simple way, but without software running on it and access to memory,
> it is only POTENTIALLY capable of exhibiting one or a small range of the
> capabilities of "mind".

A not-too-distant-future CPU chip will probably come with internal diagnostics that can detect and route around damaged elements.

> In the
> proposed moral theory of mind, we do not consider these examples to be very
> significant moral objects or subjects; although, interestingly, some people
> DO consider them to be very slight moral objects, in the sense that there is
> a slight moral repugnance to the notion of burning books (no matter who owns
> them) or, as has been discussed here recently, "wasting CPU cycles".

In an abstract sense it seems that the degree of "wrongness" of "burning" or "wasting" has to do with the destruction of organized information, or with using CPU cycles for non-useful purposes. I.e. contributing to "entropy" is the basis for the offence.

This seems to apply in the abortion debate as well. I can destroy a relatively unorganized potential human (< 3 months) but should not destroy a much more organized human (> 6 months). This seems to be related to the destruction of information content. It also holds up if you look at when physicians are allowed to disconnect life-support devices -- when it has been determined that the information has "left" the system. Extropians may extend it further by recognizing that incinerating a body, or allowing it to be consumed, is less morally acceptable than freezing it, since we really don't have the technology to know for sure whether the information is really gone.

>
> From the proposed axiom of "mind morality", one could derive specific
> propositions of moral imperative. For instance, it would be morally wrong to
> reduce mental capacity in any instance, and the EXTENT of the wrong would be
> measured by the capacity of the mental system that is the object of the
> proposition.

Ah, but you have to be careful if you require an external force to implement the morality. If I personally wanted to have the left half of my brain cut out and subject myself to a round of hormone treatments that caused extensive regrowth of stem cells, effectively giving me a baby's left brain to go with my old right brain, you could argue that I would be doing something morally wrong by destroying the information in my left brain (actually I would probably have it frozen, so your argument might not have much weight). However, if you go further and act to prevent me from having my operation, I would argue that you are behaving wrongly. After all, it's my body, damn it! We seem to recognize this principle at least to some degree with the concept of the "right to die". That right isn't generally recognized in our society yet, but I suspect that most extropians (esp. the stronger libertarians) would argue for it.

> Thus, willfully burning a book would be bad, but not very bad,
> especially if there is a copy of the book that is not destroyed. It might be
> more wrong to kill an ant (depending on the contents of the book with which
> one was comparing it), but not as wrong as killing a cat or a bat.

Yep, the more information you are "destroying", the wronger it is. But we (non-Buddhists) generally don't consider it over the line to step on an ant, and we certainly don't get upset over ants eating plant leaves, whose cells are self-replicating information stores; but if you destroy another person, or worse yet, lots of them, you have crossed over the line.

>
> > Q1: Do you "own" the backup copies?
> > (After all, you paid for the process (or did it yourself) and
> > it is presumably on hardware that is your property.)
>
> I don't think you "own" your backup copies any more than you "own" your
> children or other dependants.

But I do "own" my left brain and my right brain, my left hand and right hand and all of the cells in my body. Now, in theory I can engineer some cells to grow me a hump on my back of unpatterned neurons and then send in some nanobots to read my existing synaptic connections and rewire the synapses my hump neurons to be virtually identical. The nanobots also route a copy of my sensory inputs to my "Brhump" but make the motor outputs no-ops (after all its a backup copy). Are you suggesting that I don't "own" my hump?

> In the case of your children, they are built from the "joint
> intellectual property" of you and your mate and all of the
> atoms making up their bodies started off as your property, but still we don't
> say you "own" your children. Why? Because they are complex minds.

So is my "Brhump". But unlike the example of children, I created it entirely and it is a parasite on my body.

> Now, you may have special rights and duties with regard to minds that
> have such a special relationship to you, but "ownership" isn't among them.

Well, since it is a copy, in theory cutting it off and burning it falls into the "book burning" moral category. Since the brain can't feel pain, there are no "ugly" moral claims that this would be the equivalent of torturing or murdering someone.

>
> Morally, mind is a special sort of "thing". For one thing, it is a process.
> Thus, one might be said to have something more akin to "ownership" in the
> stored pattern of one's backup copies, but once they are "run" or "running",
> they would take on more of the quality of moral subjects as well as moral
> objects. Once a system is capable of being a moral subject, "ownership"
> ceases to be the right way to consider it as a moral object.

Clearly in the example above, the mind is running (after all, what good is a backup copy if it isn't up to date?). Now, as an interesting aside, there is the question of whether "untouched" "Brhumps" (with exactly the same inputs) will diverge from your brain and need to be edited back into complete equivalence by the nanobots. Whether or not that is necessary, the subject was created by me, for my use, and literally "is" me (even though it is a second instantiation).

Now, you might say that if I give my 2nd instantiation separate senses and let it evolve a bit, it now becomes its own unique moral subject. But what if it knew going into the game (since it's a copy of me) that I have a policy of incinerating "Brhumps" at the end of every month? [Not dissimilar from the knowledge in our current world that we don't live to 150 -- it's just the way it is.]

This comes up a little in Permutation City and to a much greater degree in the Saga of the Cuckoo (where people go climbing into tachyon teleportation chambers, knowing that the duplicate on the receiving end faces almost certain death). It's kind of a going-off-to-war mentality: knowing that you might not come back, you do it anyway if you feel the need is great enough.

>
> Where does this conclusion come from? Simple: The Golden Rule. Minds are a
> special class of moral object BECAUSE they are also moral subjects. In other
> words, we have to treat minds differently from other moral objects because
> they are like "us", i.e. there is a logical reflexivity in contemplating a
> mind as a moral object.

But military commanders will [unhappily] order their troops into situations where they know some of those moral subjects *will* be killed. If I want to use my "Brhump" in an experiment in which I know (and it knows) that it's eventually the cutting room floor for it, it seems justified if I can come up with a good reason for it. (I'm facing a really difficult problem that simply *has* to be solved; one brain isn't enough for it; create two brains, have them work on separate paths; select the brain that comes up with the best solution and kill the other brain.) That's what nature does, and most of us don't see nature as being "morally" wrong.

>
> > Q2: Do you have a right to "edit" the backup copies?
[snip]

> On the other hand, it is not only acceptable, but good to do our best
> to act to "program" or "edit" the child's mind to make it a BETTER mind.
> Thus, the proposed morality of mind would find that some "editing" of
> one's own backup copies would be good, and some bad.

It seems like editing (or even selecting brains), if done from the perspective of self-improvement, would be morally acceptable if the information gained is more than the information lost. Destroying a brain that is mostly a copy and that fails the "best solution" test probably isn't much of a loss.

>
> > Q3: If you "edit" the backup copies when they are "inactive"
> > (so they feel no pain) and activate them are they new individuals
> > with their own free will or are they your "property" (i.e. slaves)?
>
> Without a doubt, they cannot be your slaves, whether you edit them or not.
> See the response to Q1 above.

But if I went into it *knowing* that the overriding purpose of the exercise was to continually and rapidly evolve a better mind, I've implicitly signed the waiver (with myself) that part of me is destined for dog food.

>
> > Q4: & Greg's Extropian Ethics and the Extrosattva essay URL [snip]
>
> ... which is basically an extension of the Golden Rule into the arena of
> unequal minds with the potential of augmentation, i.e. a godlike being
> should treat lesser beings as much like itself morally as possible,
> because those beings may one day be godlike themselves.

The key aspect is that you are discussing entities with the "potential for augmentation". There are two problems: a scale problem and an allowed-reality problem.

The scale problem has to do with the perceived degree of difficulty of the sub-SI becoming an SI. In theory we will soon have technologies that would let us give ants (with perhaps slightly larger heads) human-equivalent intelligence. Thus ants are "potentially augmentable". Yet you don't generally see us treating ants according to the "golden rule".

The allowed-reality problem has to do with the fact that the SI "owns" the processors; presumably it is conscious of, and completely controls, anything going on within them. If the sub-SI doesn't have the interface hooks into the SI overlord, then there is no way it can climb up the evolutionary ladder. It's like arguing that a Neanderthal can build a nuclear missile or that an ant can turn itself into a human. These things aren't physically impossible, but the means to do what is required are unavailable to the subject. A sub-SI may be in an inescapable virtual reality in which it is a moral subject, but the overlord owns that virtual reality, and to the SI the sub is nothing but information organized to answer a question. When the question is answered, you are of no further use, so you get erased.

> Even without considering the potential for moral-subject-status equality,
> though, I believe the godlike SI is not completely without constraint
> in how it should treat such lesser beings, no more than we are in how
> we treat lesser animals.

Well, historically we set about eliminating any animals that were a threat, and we breed animals to be relatively tolerant of our mistreatment. The point would be that we *breed* animals and we *grow* plants, and to a large degree we do with them what we like, unless a group of us gets together and convinces/passes a law/forces the others to accept that doing so is morally wrong and they have to stop. SIs policing other SIs seems impossible with regard to the internal contents of what is going on within an SI. I can stop you from killing someone, but I can't force you to stop thinking about killing someone.

>
> The morality of mind proposed here would dictate that the subprocess you
> posit should be treated with the respect due a fairly complex mind, even if
> that complexity is far less than that of the SI.

You haven't convinced me. If the scale differences are too large, then the sub-SI is dogmeat. Your example regarding the "easy life" for subjects that graduate from lab-animal school works in cases where the levels are relatively similar (humans & chimps) but fails when the scale difference is larger (humans & nematodes).

>
> Hal's put his finger on the fact that we're not treading entirely virgin
> moral territory here. We already have to deal with moral questions inherent
> in interactions of unequal minds and in one person having some kind of moral
> "dominion" over another.

I think the hair-splitting comes down to whether the subject is externally independent (a child) or internally self-contained (an idea in my brain). From my frame of reference, my killing my "Brhump", or an SI erasing a sub-SI, has the same moral aspect as when one personality of an individual with multiple personality disorder takes over the mind, perhaps permanently killing/erasing one or more of the other personalities. In those situations, I don't think the psychologists get very wrapped up in the morality of killing off individual personalities. They judge their approach on what is feasible (can I integrate, or must I eliminate?) and what is ultimately best for the survival and happiness of the "overlord".

While I believe that it would be nice if SIs didn't erase sub-SIs and let them have free CPU cycles, I don't think it's morally wrong, when they run out of resources, to eliminate the least useful backups and experiments.

> I believe that, ultimately, the same moral
> principles that guide such questions now CAN be "mapped" onto the harder
> questions you pose, Robert. But, as I suggest above, it will require that we
> become more clear than we have been formerly about just what a moral subject
> IS.
>

True. I think my claim is that moral subjects can only be judged in relatively equivalent realities and with relatively equivalent capabilities. The greater the separation between the realities and capabilities, the less moral injustice there is in the senior entity controlling the subordinate [e.g. SIs vs. human uploads, humans vs. their own thoughts].

[snip re: physical realities vs. virtual realities]

>
> This is only a difficult problem if we take a simplistic view of what
> "reality" is. "Reality" for questions of morality and ethics IS mind,
so
> "virtual reality" is in a very real sense more "real" than the underlying
> physical substrate. (Phew! That was hard to say . . .)

But my mind may be killing off ideas or personalities, or perhaps imagining many horrific things, and people don't generally get upset about it. Virtual reality is real to the people in it, but to the people outside of it, it doesn't really exist. So SIs presumably don't police the thoughts of (or the virtual realities destroyed by) other SIs.

>
> [snip]
> > ... (a) the relative "seniority" (?) of the actor as compared
> > with the "person" being acted upon; or (b) the physical
> > reality of the actions.
>
> I maintain that the latter factor is all but irrelevant to the moral
> questions posed here

I'm not convinced; I think the perception of the actions at the reality level is the critical basis for morality. Humans judge each other's morality by what they do, not what they think. SIs judge other SIs by whether they destroy potential SI civilizations (maybe), but not by whether they are erasing virtual information in their own memory banks.

> Thus, the more
> "senior" the moral actor, the more justified we are in judging its actions
> (while at the same time those moral judgments become more difficult -- thus
> we eventually encounter an almost christian moral inequality in which it
> becomes nearly -- but not entirely -- impossible for "lesser", "created"
> beings to morally judge the actions of a much "greater", "creator" being).
>

I would have to agree here. If you are incapable of understanding the context for what is occurring in your reality, you may be incapable of judging the morality of *what* happens to that reality.

An enjoyable post; it forced me to think a lot.

Robert