So, my voice-output quote-of-the-day [wish I had one of these] says to me in its Clint Eastwood voice, "Are you feeling lucky?", to which I reply, "Not particularly, but I'm feeling pretty brave."
[snip, lots of stuff regarding mind types, definitions, etc.]
> A MORAL theory of mind (which seems to be what you're looking for) may be
> dimly perceived in this insight, as applied to the questions you've
> presented. As a first pass at formulating such a moral theory of mind,
> perhaps we can say that an entity should be treated as both a moral subject
> and a moral object TO THE EXTENT THAT it exhibits more or fewer of the
> various distinct elements of "mind". As an example, consider a book or
> computer hard drive as an instance of memory. Both are utterly passive
> repositories of information, incapable by themselves of processing any of the
> data they record.
Well, it depends what you mean by "incapable". A good disk drive today has a combination of hardware and firmware that handles ECC error correction, optimizes seek activity (using an elevator algorithm), and handles bad-block remapping. These are not "passive" operations in my opinion. You might respond that a human designed that stuff into them. I agree. Nature "designs" a lot of stuff into all organisms so that they operate in an appropriate fashion for their environment.
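(For the curious: the "elevator algorithm" I mean is just the policy of
sweeping the head in one direction, servicing every pending request on
the way, then reversing. A rough sketch in Python, purely illustrative
and not any particular drive's firmware, with made-up request numbers:

    import bisect

    def elevator_schedule(pending, head, direction=+1):
        # Order pending cylinder requests like an elevator: keep moving
        # in the current direction, service everything on the way,
        # then reverse and sweep back for the rest.
        pending = sorted(pending)
        split = bisect.bisect_left(pending, head)
        if direction > 0:
            return pending[split:] + list(reversed(pending[:split]))
        return list(reversed(pending[:split])) + pending[split:]

    # head at cylinder 50, requests scattered across the disk
    print(elevator_schedule([98, 183, 37, 122, 14, 124, 65, 67], head=50))
    # -> [65, 67, 98, 122, 124, 183, 37, 14]

The point being that even "dumb" storage is constantly running little
decision procedures like this one.)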
> Likewise, consider your PC's CPU: It is capable of processing data in
> a very simple way, but without software running on it and access to memory,
> it is only POTENTIALLY capable of exhibiting one or a small range of the
> capabilities of "mind".
A not-too-distant future CPU chip will probably come with internal diagnostics that can detect and route around damaged elements.
In an abstract sense, it seems that the degree of "wrongness" of
"burning" or "wasting" has to do with the destruction of organized
information or the use of CPU cycles for non-useful purposes. I.e.,
contributing to
"entropy" is the basis for the offence. This seems to apply in the
abortion debate as well. I can destroy a relatively unorganized
potential human (< 3 months) but should not destroy a much more
organized human (> 6 months). This seems to be related to the
destruction of information content. It also holds up if you
look at when physicians are allowed to disconnect life-support
devices -- when it has been determined that the information
has "left" the system. Extropians may extend it further by
recognizing that incinerating or allowing a body to be consumed
is less morally acceptable than freezing it since we really
don't have the technology to know for sure whether the information
is really gone.
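(If you want to put even a crude number on "organized" vs.
"unorganized", the usual Shannon measure is one imperfect proxy -- it
only sees symbol statistics, not meaning, but it at least separates
pure repetition, structured text, and random noise. A toy sketch in
Python, my own illustration and nothing more:

    import math, os
    from collections import Counter

    def entropy_bits_per_byte(data):
        # Shannon entropy of the byte distribution: ~0 bits/byte for a
        # completely repetitive stream, ~8 bits/byte for uniform noise.
        counts = Counter(data)
        total = len(data)
        return -sum((n / total) * math.log2(n / total)
                    for n in counts.values())

    print(entropy_bits_per_byte(b"A" * 1000))                     # ~0.0
    print(entropy_bits_per_byte(b"the cat sat on the mat " * 40)) # ~3.0
    print(entropy_bits_per_byte(os.urandom(100000)))              # ~8.0

A real "information content" measure for a brain would obviously have
to capture structure and redundancy, not just symbol frequencies, but
the direction of the argument is the same: the more irreplaceable
structure you erase, the worse the act.)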
>
Ah, but you have to be careful if you require an external force to
implement the morality. If I personally wanted to have the left
half of my brain cut out and subject myself to a round of hormone
treatments that caused extensive regrowth of stem cells, effectively
giving me a baby's left brain to go with my old right brain, you could
argue that I would be doing something morally wrong by destroying
the information in my left brain (actually I would probably have
it frozen, so your argument might not have much weight). However, if
you go further and act to prevent me from having my operation, I
would argue that you are behaving wrongly. After all, it's my body,
damn it! We seem to recognize this principle at least to some degree
with the concept of the "right to die". This isn't generally recognized
in our society, but I suspect that most extropians (esp.
> In the proposed moral theory of mind, we do not consider these examples to be very
> significant moral objects or subjects; although, interestingly, some people
> DO consider them to be very slight moral objects, in the sense that there is
> a slight moral repugnance to the notion of burning books (no matter who owns
> them) or, as has been discussed here recently, "wasting CPU cycles".
> propositions of moral imperative. For instance, it would be morally wrong to
> reduce mental capacity in any instance, and the EXTENT of the wrong would be
> measured by the capacity of the mental system that is the object of the
> proposition.
> Thus, willfully burning a book would be bad, but not very bad,
Yep, the more information you are "destroying", the wronger it is.
But we (non-Buddhists) generally don't consider it over the line to step
on an ant, and we certainly don't get upset over ants eating plant
leaves whose cells are self-replicating information stores, but if
you destroy another person, or worse yet, lots of them, you have
crossed over the line.
> especially if there is a copy of the book that is not destroyed. It might be
> more wrong to kill an ant (depending on the contents of the book with which
> one was comparing it), but not as wrong as killing a cat or a bat.
>
Clearly, in the example above, the mind is running (after all, what
good is a backup copy if it isn't up-to-date?). Now, as an interesting
aside, there is the question of whether "untouched" "Brhumps" (with
exactly the same inputs) will diverge from your brain and need to be
edited back into complete equivalence by the nanobots. Whether or not
that is necessary, the subject was created by me, for my use, and
literally "is" me (even though it is a second instantiation).
> Morally, mind is a special sort of "thing". For one thing, it is a process.
> Thus, one might be said to have something more akin to "ownership" in the
> stored pattern of one's backup copies, but once they are "run" or "running",
> they would take on more of the quality of moral subjects as well as moral
> objects. Once a system is capable of being a moral subject, "ownership"
> ceases to be the right way to consider it as a moral object.
>
But military commanders will [unhappily] order their troops into situations
where they know some of those moral subjects *will* be killed. If I want
to use my "Brhump" in an experiment in which I know (and it knows) that
it's eventually the cutting-room floor for it, it seems justified if
I can come up with a good reason for it. (I'm facing a really difficult
problem that simply *has* to be solved; one brain isn't enough for it;
create two brains, have them work on separate paths; select the brain
that comes up with the best solution and kill the other brain.)
That's what nature does, and most of us don't see nature as
being "morally" wrong.
> Where does this conclusion come from? Simple: The Golden Rule. Minds are a
> special class of moral object BECAUSE they are also moral subjects. In other
> words, we have to treat minds differently from other moral objects because
> they are like "us", i.e. there is a logical reflexivity in contemplating a
> mind as a moral object.
>
But if I went into it *knowing* that the overriding purpose of the
exercise was to continually and rapidly evolve a better mind, I've
implicitly signed the waiver (with myself) that part of me is
destined for dog food.
> > Q3: If you "edit" the backup copies when they are "inactive"
> > (so they feel no pain) and activate them are they new individuals
> > with their own free will or are they your "property" (i.e. slaves)?
>
> Without a doubt, they cannot be your slaves, whether you edit them or not.
> See the response to Q1 above.
Well, historically we set about eliminating any animals that were a threat,
and we breed animals to be relatively tolerant of our mistreatment.
The point would be that we *breed* animals and we *grow* plants, and
to a large degree we do with them what we like, unless a group of us
gets together and convinces/passes a law/forces the others otherwise.
> Even without considering the potential for moral-subject-status equality,
> though, I believe the godlike SI is not completely without constraint
> in how it should treat such lesser beings, no more than we are in how
> we treat lesser animals.
While I believe that it would be nice if SIs didn't erase sub-SIs and let them have free CPU cycles, I don't think it's morally wrong, when they run out of resources, to eliminate the least useful backups and experiments.
> I believe that, ultimately, the same moral
> principles that guide such questions now CAN be "mapped" onto the harder
> questions you pose, Robert. But, as I suggest above, it will require that we
> become more clear than we have been formerly about just what a moral subject
> IS.
>
True, I think my claim is that moral subjects can only be judged
within relatively equivalent realities and capabilities. The greater
the separation between those realities and capabilities, the less
moral injustice there is in the senior entity controlling the
subordinate [e.g. SIs vs. human uploads, humans vs. their own thoughts]
[snip re: physical realities vs. virtual realities]
>
But my mind may be killing off ideas or personalities, or perhaps
imagining many horrific things, and people don't generally get
upset about it. Virtual reality is real to the people in it,
but to the people outside of it, it doesn't really exist. So SIs
presumably don't police the thoughts of (or the virtual realities
destroyed by) other SIs.
> This is only a difficult problem if we take a simplistic view of what
> "reality" is. "Reality" for questions of morality and ethics IS mind,
> "virtual reality" is in a very real sense more "real" than the underlying
> physical substrate. (Phew! That was hard to say . . .)
> Thus, the more
> "senior" the moral actor, the more justified we are in judging its actions
> (while at the same time those moral judgments become more difficult -- thus
> we eventually encounter an almost christian moral inequality in which it
> becomes nearly -- but not entirely -- impossible for "lesser", "created"
> beings to morally judge the actions of a much "greater", "creator" being).
>
I would have to agree here. If you are incapable of understanding the
context for what is occurring in your reality, you may be incapable
of judging the morality of *what* happens to that reality.
An enjoyable post; it forced me to think a lot.
Robert