UPL: Volition, Rights, and Uplifting

Technotranscendence (neptune@mars.superlink.net)
Fri, 10 Dec 1999 07:26:18 -0800

On Tuesday, December 07, 1999 8:37 AM, Glen the very rushed Finney (Delvieron@aol.com) wrote:
> Ouch, ya got me there <g>. Yes, there is a whole spectrum in between. I
> suppose what I should have said is that octopods show signs of
> problem-solving behavior, which goes beyond instinct and conditioning. In
> the classic food-in-a-jar example, an octopus who has never seen a
> screw-top jar figures out how to open it. The octopod has a goal (get the
> food), and is able to learn from trial and error in a relatively short
> time how to open the jar using a novel behavior. Now, whether you think
> this represents volitional behavior or not may depend on your definition
> of volition. I would say that when something engages in a contingent,
> novel behavior it is likely to be volitional, though the level of
> understanding may be low (but nevertheless present). I don't know if this
> would meet your criteria for rationality, however.

It's not so much a question of my criteria for this or that as of developing any criteria at all. The famous problem-solving example -- getting the crab out of the jar -- seems like a good test, but I'd like to be a bit critical of it. I would want to use variations on it to see whether the octopus can vary its strategies even more.

Even Glen does not provide a definition of volition above. He only tells us "when something engages in a contingent, novel behavior it is likely to be volitional..." And, yes, I believe "novel behavior" is a necessary condition for volition. Whether it's sufficient is another matter. After all, I could write an algorithm which just does "random" things when a problem isn't solved within a certain number of steps. Would anyone claim such an algorithm has volition?
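
To make the point concrete, here is a toy sketch of such an algorithm (Python; the "problem" and the action names are invented for illustration, not a model of any real animal):

    import random

    def solve(problem, actions, max_steps=10):
        # Canned "instinctive" strategy: repeat the first action.
        for _ in range(max_steps):
            if problem(actions[0]):
                return actions[0]
        # Failure triggers "random" novelty. The behavior is contingent
        # (failure causes the switch) and novel (the random actions are
        # new), yet nobody would call it volitional.
        for _ in range(1000):
            action = random.choice(actions)
            if problem(action):
                return action
        return None  # give up

    # Example: the "problem" is hitting on the one action that works.
    print(solve(lambda a: a == "twist", ["push", "pull", "shake", "twist"]))

By the criterion of contingent, novel behavior alone, this little program would qualify.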

> I wonder, does conditioning also count as "programmed" behavior (granted,
> not preprogrammed)? Just a thought. As for pinning down volition, it is
> difficult. I know that in patients recovering from severe brain injury,
> we use a behavioral definition for conscious, aware behavior, basically
> similar to the one I gave above, which is contingent, novel behavior.
> Oftentimes the signal-to-noise ratio is rather bad when trying to figure
> this out in patients (random movements may mask the behavior, a
> fluctuating level of consciousness may mean the patient's behavior is not
> always consistent, etc.), so we use statistical analysis to see if the
> patient's responses are nonrandom.

I'm unfamiliar with this line of evidence, but inferences from damaged brains present all sorts of problems, especially those raised by Robert Efron in his _The Decline and Fall of Hemispheric Specialization_. An analogy close to one of his: imagine removing all the RAM from your computer and then not being able to boot your OS. If you didn't know what RAM was, you might conclude that RAM is what makes the OS boot. This is why I'm a bit leery of using brain-damaged animals for comparisons. Even so, I'm sure Efron goes too far. After all, such damage does give one some idea of how things work, even if it can be misleading.
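
Incidentally, the nonrandomness analysis Glen mentions can be as simple as a one-sided binomial test. A toy sketch with invented numbers -- I make no claim this is the actual clinical protocol:

    import math

    def p_value(successes, trials, p_chance=0.5):
        # P(X >= successes) for X ~ Binomial(trials, p_chance): the
        # probability of seeing at least this many command-consistent
        # responses if the patient were responding purely at random.
        return sum(math.comb(trials, k)
                   * p_chance**k * (1 - p_chance)**(trials - k)
                   for k in range(successes, trials + 1))

    # Say 14 of 20 trials showed the commanded movement:
    print(p_value(14, 20))   # ~0.058 -- suggestive, but not conclusive

If the resulting probability is small, chance is an unlikely explanation, which is exactly the inference Glen describes.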

> >> I agree with the first statement. I gather the way to test
> understanding is to present the organism with puzzles of the sort that it
> will want to solve, such as mazes to get to food or mates. <<
>
> Yes, this is the way (see my examples above). However, how do you figure
> out what and why it wants? I suppose by starting off with something that
> it is known to want and then offering a choice between that and other
> items.

Of course! This is why I used food and mates!:P

> Wouldn't it be interesting to offer repeatedly a choice between a simple
> puzzle where there is food and a more complex one where there is not? If
> the animal after a time started to show more interest in the more complex
> puzzle when the food is clearly only in the redundant simple one, that
> may be an indication of pure curiosity in the test subject...and that
> might be the beginning of laying a foundation for justification for
> uplifting the species. Another good indication might be if the animal
> practices behaviors it has learned even in the absence of rewards
> (perhaps a hint that the animal might welcome improvement in its
> performance). True, this is reading a lot into these types of behaviors,
> but it is at least an attempt to understand life from that species' point
> of view and gauge crudely how they might feel about uplift.

That is a good idea! I read several years ago in Griffin's _Animal Thinking_ about an experiment in which dolphins were trained to exhibit novel tricks to get food. In other words, rather than repeating the same behavior to get a reward, they had to do something new to get it. Again, I can imagine this not being volitional behavior, as with the algorithm example above. However, it seems more like what we are looking for.
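
In fact, the worry can be made precise. Here is a minimal sketch (again hypothetical) of a "reward only novel behaviors" training rule of the sort Griffin describes. Note that a purely random action generator passes it:

    import random

    ACTIONS = ["spin", "tail-slap", "breach", "bubble-ring", "roll"]

    def random_subject(seen):
        # No goals, no understanding -- just a random pick.
        return random.choice(ACTIONS)

    def run_session(subject, trials=20):
        seen, rewards = set(), 0
        for _ in range(trials):
            action = subject(seen)
            if action not in seen:   # "novel" by the trainer's criterion
                rewards += 1
                seen.add(action)
        return rewards

    print(run_session(random_subject))  # the random subject earns rewards too

So passing the novelty test looks like necessary evidence for volition, not proof of it.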

> >> I disagree. The definition appears too ambiguous. In any discussion on
> rights, the first thing to ask is Why rights? Why not do without them?
> The answer inside Objectivist and some libertarian and classical liberal
> circles is that rights are the means of defining individual autonomy in a
> social sphere so as to allow freedom of action. For instance, my right to
> property allows me to do what I want with my stuff regardless of what
> others want -- provided, of course, I don't use my property to violate
> their rights. <<
>
> I tend to divide rights crudely into two categories, freedoms and
> protections. The second one, protections, I tend to apply to more
> individuals than I do freedoms. For example, an infant has many
> protections, but virtually no freedoms.

While the example Glen uses seems clear, his previous statement does not. I don't know why he would apply freedoms to fewer individuals than protections. Most rights theories are, after all, about how free individuals are.:)

> In my way of thinking, protections apply to beings capable of feeling
> (and caring) about sensory input, whereas freedoms require more
> understanding of the situation (and the ability to care about what they
> care about? <g>). Also, to my way of thinking, responsibility goes hand
> in hand with freedom (but that's another subject). So, for the basic
> protection-type rights, I don't believe that rationality is necessary,
> just subjectivity. I'm in a rush now, but will be happy to elaborate
> later.

I would like to see this, because I think it is a bit of a muddle. I'd like to agree with Glen, but it would seem an infant's "right to be protected" (to be fed, clothed, cared for, etc., I assume) is really derivative of a lot of other things. In Lockean and Randian rights theories, no one can really be forced to provide those things -- except the parent or guardian. In fact, the infant's "right to be protected" is more akin to an implied contract -- you bring me into this world, you have to care for me for some time -- than to a political right, i.e., a sphere of action for an individual.

> >> Now this does not answer the question either. It merely defines
> fuzzily what rights are for. Why would we need them? gets closer to the
> mark. We need them because we need to live socially, materially, and long
> range, and also since we are rational beings. (Dogs, too, are social and
> require material stuff to live, yet they've not reached the point of
> drafting a constitution and the like. Why? Because they are not
> rational -- at least, not in the sense of having a conceptual
> consciousness like ours.) <<
>
> Humans didn't have any written code of laws at one time (and were still
> rational in my opinion). I would guess that several species have
> "cultural" rules of socialization which are learned from their family
> unit, but this doesn't necessarily indicate rationality.

I agree here, but Glen is taking my analogy too literally. My point is not that dogs need to have a code of written law, but that they probably can't even think in terms of law and responsibility the way the most "primitive" humans can. I believe this is not because of a lack of writing or language, but because they are not rational. I think, for the most part, they are guided by instinct. That's not to say humans are free of instinct, but they are on an altogether different level from dogs.

> >> Does caring fall under this? I think it's easy for a being which is
> nonrational to care. Territoriality (caring about something like one's
> nest or food cache) and kinship/mate affection (caring for relatives and
> mates) seem well demonstrated in many animals. <<
>
> And I would say that we should respect these desires.

I've never advocated otherwise, but that does not mean I believe nonhumans have rights or that such respect equals rights.

> >> We could retreat to "reflective caring," but that does not help us,
> since we need to know how to test for reflection. I submit that once we
> have reflection, caring or no, we will have sentience. <<
>
> I would argue that you still need to have caring, otherwise you just have
> a knowledgeable automaton. In my opinion, it takes more than being able
> to model yourself to achieve true sentience...you must also have a model
> of what you want to be.

Perhaps, but I'll have to reflect on this some more.:) However, I do think all conscious living things have desires, and all reflective living things are a subset of those. Since having a desire can be parsed as caring about something -- e.g., caring about getting food or getting laid -- what Glen says might be a truism. At least in terms of uplifting, we are going to be dealing with creatures that have desires already. Perhaps in AI we could create desireless but reflective minds. I'm not sure about that, but it seems unlikely in uplifting. At least, doing so in an uplift would seem pointless -- like creating clinically depressed people who stare at the ceiling all day. Not my cup of tea.:)

> >> Also, I submit that individuals have rights even when they don't
> exercise their abilities. Thus, a guy who has the ability to be rational
> could own property, be free to do as he pleases even though he is
> irrational -- provided he does not violate anyone else's rights. <<
>
> I agree that the capacity is more important than the constant functioning
> of that capacity, though I would argue that when someone is blatantly
> nonrational, there is a role for curtailing freedom in order to preserve
> protections for that person and others. It is where there is room for
> doubt that I err on the side of freedom.

I would not curtail his or her freedom unless it could be shown that he or she will violate rights. I.e., the guy who talks to invisible people on the bus is probably safe, but the one who randomly swings an axe in the mall is not.:)

> >> The species itself does not think. Members of it do. <<
>
> True enough, but there may be a genetic bias for how members of the
> species would feel about uplift.

That would then not be their thinking about it, but their genetic bias about it, which would be nonvolitional.:) Anyway, this is too speculative. There might be a genetic bias to vote for Pat Buchanan. Does Glen think the Human Genome Project will map the Buchanan gene in a few weeks?:)

> >> However, asking them beforehand is impossible -- unless they are
> already sentient, in which case uplifting would be redundant. <<
>
> Does increasing the intelligence of already sentient beings then only
> count as IA? In that case, I'm not sure it would be possible to truly
> uplift the great apes (maybe not even dogs <g>).

Maybe then it would only be IA (intelligence augmentation). Of course, there is some room for doubt about dogs' sentience.

> >> Asking them afterward doesn't matter, since we won't be able to undo
> the uplift and each one of them will be free to change his/her/its brain
> if he/she/it wants to. I'm not sure that the uplifting party has an
> obligation to undo the uplift, though I would suspect not. <<
>
> Then I would suggest that the uplifter might be at least liable for
> providing the means for reversing uplift, or, if the desire to regress is
> considered pathologic, for providing appropriate treatment. Can't just
> leave your uplifts to fend for themselves until they're on their feet, or
> tentacles, or paws, etc.

This has to be worked out. I don't think the uplifter would be responsible for undoing the uplift if the uplifted being simply had a romantic view of its species' former state. I would say, give the uplifted the technology and let them do what they will after that point. This is somewhat like raising a child: one does not have a duty to make the adult that child becomes revert to a childlike mental state later in life. Of course, this is a very loose analogy...

Cheers!

Daniel Ust
http://mars.superlink.net/neptune/