From: Brett Paatsch (bpaatsch@bigpond.net.au)
Date: Tue Aug 05 2003 - 12:22:14 MDT
Paul Grant <shade999@optonline.net> writes:
> <brett> There are some classes of pre-emptive
> action made on the basis of genuinely held, earnestly
> reasoned (note I am not touching *belief* here) views
> that would require action in my view.
> <me> trying to justify a pre-emptive measure on the
> notion that it is "genuinely held" or "earnestly reasoned"
> is a rationalization in my opinion, generally to excuse
> the type of behavior you are engaging in...
By including the word 'generally' above aren't you in
fact conceding my point? I.e. that in *some* specific
circumstances pre-emptive action *is* morally justified?
> the limit on this line of reasoning though, is in the
> duration of the act
> .... for instance, say you were prescient, and saw a man
> who was going to mug you (with a knife) 10 minutes
> from now, and hit him over the head; then you would be
> acting morally (given ur prescience). Let's say you are
> not prescient, and you hit him over the head on the
> possibility that he might mug you; then you are acting
> immorally.
In the real world, where our moral judgement is supposed
to assist us, (or at least that is my contention) we are
*never* fully prescient and so there is always *some*
chance the suspected or likely mugger may not in fact
mug us. Assuming one values oneself, how can we do
otherwise than weigh up the chances as best we can?
My answer - we can't. Therefore the point becomes
how best we can.
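Since weighing up the chances is all we can do, the weighing itself can be made explicit. Here is a minimal expected-utility sketch of the mugging example; all the numbers are purely hypothetical, chosen only to show the arithmetic, not anything claimed in this thread:

```python
# An illustrative sketch of "weighing up the chances as best we can":
# compare the expected utility of acting pre-emptively vs waiting.
# All costs and probabilities below are hypothetical.

def expected_utility(p_threat, cost_if_mugged, cost_of_preemption):
    """Return (EU of pre-empting, EU of waiting) for a given threat probability."""
    eu_preempt = -cost_of_preemption        # paid for certain if you act
    eu_wait = p_threat * -cost_if_mugged    # paid only if the mugging happens
    return eu_preempt, eu_wait

# With near-prescience (p = 0.95) pre-empting is the better bet;
# on a mere possibility (p = 0.05) it is not.
for p in (0.95, 0.05):
    preempt, wait = expected_utility(p, cost_if_mugged=100, cost_of_preemption=20)
    print(p, "pre-empt" if preempt > wait else "wait")
```

The point of the sketch is only that the same decision rule gives opposite answers as the probability estimate changes, which is why the quality of the judgement, not the rule, carries the moral weight.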
At this point I think it's worth distinguishing between
a moral code, which may be a preconsidered
framework that one uses to help reach a particular
moral judgement, and moral judgements per se.
There are *no* moral codes that provide definitive
answers to all the moral dilemmas that arise, just as
there are no maps on a scale of 1:1. Therefore whenever
a particular moral judgement is required there is no
dodging that the subjective individual must make it,
with or without the benefit of a more or less
sophisticated moral code.
> Lets say you are not prescient, and he is mugging
> someone else (as it is apparent to you from your
> vantage point), and you intervene by hitting him
> over the head... Then your actions may or may
> not be immoral, on the basis that he may not be
> the one doing the mugging, but rather, may be the
> muggee.
Actually I'd say in the circumstances you describe
the person *has* acted morally, but with poor
judgement, so poor in fact that they may be found
to have acted illegally.
> The point being that you have to consider the
> granularity of the event, the knowledge
> you had as an autonomous agent, the environment
> you're in, and the action chosen, and the outcome of
> that action...
Sure. But the "you" in this case is a subjective individual
using their own judgement, when such judgement may
or may not be particularly good. So it also behooves us to
consider the granularity of the *moral code* that is taken
by many of us into situations where it can guide particular
moral judgements.
If one is running around in 2003 holding that the 10
commandments are all the moral code that is needed
one is going to come up against some particularly
curly challenges in interpreting how to operationalise
the directive that thou shalt not kill. Even this simple
edict is subject to interpretation. Life in 2003 is known
to take place on more than the organismic level. Cells
are alive. And human cancer cells are human life.
Clearly it is absurd to argue that a cancer cell, or
multiples of them, carry the same moral weight as a
person dying of cancer. Yet this is not much of an
exaggeration beyond the proposition that say all embryos
are a form of human life, when by human life what is
obviously meant is personhood.
Before laws can be set that codify legally what may and
may not be done it is prudent to have a moral discussion
where the words we use do not obfuscate the real
issues at hand. Issues such as how does a civil society
weigh the rights of potential persons (embryos, fetuses,
etc. at different stages). When we do not decide or address
these questions public policy continues to be made on
the basis of outdated moral and legal codes. And persons
suffer needlessly.
<snipped stuff on legal code >
> Of course you could always say (arbitrarily) that I was
> reacting to the best of my abilities to the best of my
> knowledge ergo my action was moral by my system of
> morals/ethics.... But I tend to think of that as a cop-out.
Really? I think the key word here is 'tend'. How could you
put a moral obligation on someone to act better than the
best of their abilities and knowledge?
Do you think there is a moral sphere separate from the
legal sphere? Some apparently don't. I think the legal
sphere is smaller than the moral sphere.
>
> > In relation to your secondary point (stated in this letter);
> > I really don't think morality has anything necessarily to
> > do with self-delusion, or the acknowledgement thereof.
> > Or rather, there is no truth that states necessarily you
> > have to be honest, ergo an act of dishonesty (as it
> > relates to self-delusion) does not violate any
> > particularly great truth.
>
> <brett> First the status of morality and the rationality of
> ethics is pretty widely regarded at least so far as I am
> aware in philosophical circles as being almost a matter
> of opinion.
> (eg. Bertrand Russell. History of Western Philosophy).
>
> <me> I'm sure it is; until you run into an event that
> requires faith or belief outside of rationality.
Actually I think BR would hold the line even in the face of
your example. But BR overlooked a few things, as Gödel
pointed out. Maybe he abandoned the search for a rational
ethics too early.
> Ergo if I'm marooned on a desert island for 60 years,
> does it really make a damned difference if I hallucinate
> marilyn monroe on the island with me in order to
> remain sane?
Legally no. Morally? Depends. By the code I've been arguing it
*would* make a difference if there was some net difference in
utility to you. ie. If you really *could* make the decision to
hallucinate to preserve your sanity (or not) then I'd argue the
moral choice is the one that you *think* will result in the best
outcome for you.
Whether it would in fact yield the best outcome for you is not
the point, as the facts of the outcome are not knowable to you
at the time you decide.
Now that's the moral code. The reason for the moral code is
that judgements will usually be required in real life which one
cannot anticipate, and the more sophisticated your
moral code, the better (the more enlightened) your judgement
of your own best interests will be.
In this particular case I don't think there is much latitude for
immoral action as you really would be alone on the desert
island in the situation you stipulate. Of course the situation
you stipulate could not arise. One would never know one
was going to be marooned for sixty years AND CHOOSE
to hallucinate. Hallucinations for the most part are going
to be dysfunctional even on the island.
> I tend towards a functional and dysfunctional definition of
> sanity; if it's dysfunctional, then you are not being ethical.
This seems to be confounding sanity with ethics. Which is
problematic if the insane cannot make sound judgements
in their own interests by virtue of being insane.
> if its functional, you are being ethical.
> and since functionality is related to the environment you are
> operating in, ergo my comment about self-delusion not really
> having anything to do with morality.
>
> I definitely think everyone is engaged in it to some degree
> (self-delusion), and to the extent that it helps you, it's ethical
So do I. The human condition is mortal. It would hardly
behoove us to dwell on it excessively and abandon hope when
there was none. Perhaps the illusion of a life after death only
becomes dysfunctional when it gets in the way of the realisation
of longer life in practice.
In the vast majority of cases self-delusion *is* going to be
harmful. In those circumstances where it is not harmful to
anyone, including the person who is self-deluded, then I'd
agree it's not immoral.
>
> <brett> I find this conclusion (Bertrand Russell's)
> powerful, dangerous and deeply unsatisfying so I am
> keen to have at it.
>
> <me> I was just telling my little sister yesterday about one
> of the classical issues in my life (at one point during
> my younger years); at what point does being too intelligent
> start to harm you (the classic form being, if you could trade
> intelligence for guaranteed happiness, would you do it)...
> most intelligent people say no; I think the really intelligent
> people though, when they consider it, say yes.
I think this is pure speculation. Would a less intelligent you
be you? If you think so there may be possibilities for you
to chart a life for yourself that involves more happiness
and less intellect. But personally I don't think so. How do
you aim at happiness without identifying something that
will make you happy? Happiness is not itself a thing that
can be pursued.
Happiness is not an object, it's a consequence. Doing things
makes us happy. Actually drugs and stimulation of neural
centres probably make us happy in a sort of way too. But
these seem to involve short term happiness (pleasure) at the
cost of long term happiness. (I think I'm picking up Bishop
Berkeley here but could be mistaken).
> This is of
> course, assumes that people intuitively seek to maximize
> their utilities, and said maximization of utility defines a
> state of happiness [which is, I think, reasonable]...
I don't. I think it's premature at best and problematic at
worst. One cannot be happy without a cause. Happiness
per se is not pursuable. Pleasure is. Lesser levels of
sentience are. But I doubt these are what appeal to you
as an alternative.
> Ergo self-delusion
> is only dangerous if you can't discard the delusion when
> it becomes detrimental to your pursuit of happiness.
Can you think of a form of happiness that you could choose
to pursue where happiness qua happiness was not the goal?
I think you will find that happiness is a side effect. It cannot
be approached directly as though it were a destination.
It arises as a result of achieving or doing other things. Things
which are in accordance with our nature as rational social
beings.
> <brett> Ok. Now here's my point. Unless a moral code
> can arise from a set of universals such as a propensity to
> reason and a predisposition to sociability then there is
> not likely to be much genuine agreement between
> subjective individuals on moral codes.
>
> <me>Oh I'll *agree* that it is absolutely necessary
> to have something to that effect; I think the inclusion of
> a predisposition to sociability is where your moral system
> will fail, as in a fair amount of people are not (at some point
> in their lives) sociable....
I agree its the weak point.
> Any moral system you build on that
> premise is doomed to fail because it does not take into account
> actions by that subpopulation of people (antisocial individuals
> who are operating on a different ethical system). I would
> state that ur assumption that there is a propensity to
> reason is a reasonable one in that it is necessary for
> the ability to recognize other autonomous agents actions
> for what they are; expressions of their own moral/ethical
> systems..
Ah, I think you missed my point. The potential to persuade
using the sociability aspect *is* far stronger when individuals
are powerless, and my point is that all those who are mortal
are now becoming *aware* that they possess a poor form of
wealth and power if it can't extend their life and health. There
is an opportunity there to get them to revisit the social compact.
But these cagey old survivors will not fall for bs. When arguments
are put to them that are not in their interest they will not buy into
them.
So in my view a moral argument cannot be put to a rich or
powerful individual unless it is couched in terms of offering
*something* for them. We live in an historic period. In this
period it might be possible to promote a policy of more life
for all or more life for none.
The alternative may be cabals of the powerful working to
'immortalise' themselves. Such a scenario may restart
"history" whose demise was greatly exaggerated.
>
> <brett> Further those
> who do not know endeavour to understand themselves,
> what manner of creature they are, are not going to be
> in a position to know what the most optimal compromises
> for them are when compromises need to be made.
>
> <me> I've met several people who are extremely intuitive,
> but unable to verbalize (form coherent sentences expressing)
> their viewpoints...they just know what is right for them, and
> what is not... how does your system encompass them?
They learn to reason and they learn to use language to persuade.
They learn to understand what they want. This gives them the
best chance to make their way and improve their situation as
they go. It does not guarantee them success. The universe in
which hard work guarantees success is not this one in my view.
> <brett> A person
> that deludes themselves that they are a different sort of
> creature with different sets of drivers and needs than
> they actually have is precluded from sitting down at the table
> to negotiate for their own best interests because they do not
> know their own best interests. A person that deludes
> themselves willingly can hardly be a person that others
> would want to engage in moral compacts with.
>
> <me> according to you :) to me its fine :) In fact, i rather
> like space cadets :)
> It's not the space cadets that you have to worry about.
> It's the wizened old cynics and manipulators that figure
> that life is a bitch and that they are not going to join the
> ideological idiocy of submission. These guys have the
> power to fuck up all your plans and ironically shortchange
> themselves in their cynicism too. They are not greedy
> enough for life (according to this theory). There is a fatal
> foolishness in their cynicism. They may be happy to have
> ten more years of power in the forms that they have
> become accustomed to. They may rate their progress
> not against what is possible but against how big a difference
> there is between them and the common man.
>
> The logical consequence of this line of thinking unchecked
> is the formation of power cabals and starker separations
> between the haves and the have nots (in which ironically
> even the haves will have less than they may have had).
>
> <me> I should give you a friendly warning at this point;
> I am generally where people's moral systems (day-2-day)
> break; they invariably adopt (when dealing with me) mine
> as it is impossible to force me to adopt theirs, and generally
> speaking, I am a fairly productive person (and very difficult
> to avoid if your moral system ends up conflicting with mine).
> I say this of course, in a friendly manner, because it will
> give you an insight into my fundamental objection to your
> system....that is, it is not sufficiently broad to
> encompass all useful people (non-functional insanity excluded).
>
> <brett> Within the context of the time available I think
> individuals ought to use pan critical rationalism in
> their approach to understanding their own nature and
> then pursuing the optimal outcome for them in accordance
> with their nature.
>
> <me>....which may end up not being rational (to outside
> examination)....
>
> <brett> This optimal outcome will necessarily
> involve compromise because others whose cooperation
> is needed will not if they are enlightened and empowered
> (and likely to be good allies as a consequence) strike
> bad bargains.
>
> <me> I would agree with this sentence :) compromise
> is often called for amongst people if they hope to culture
> friendships that will be substantial in trade.
>
> > <me>
> > Or to put it more bluntly, sometimes self-delusion
> > is the ticket :) Ever wonder why (evolutionary-speaking)
> > we have emotions?
> >
>
> <brett> I reckon to the extent one is self deluded one will be
> screwed at the negotiating table because perceptive others
> will recognize you as having a lower sentience quotient and
> the only one to blame for this if you are self deluding will
> be yourself.
>
> <me> depends on how intelligent they are. some forms of
> self-delusion are bad, some are good. some are bad generally,
> but useful in achieving short-term goals...
>
> <me> fundamentally though, I think your exclusion of
> people who are self-deluded would end up eliminating
> all of humanity... and ergo ur proposed moral system would
> fail in that it would not encompass anyone... and if there
> was a completely rational person (at some point), then they
> would probably end up suicidal.. Of course this is conjecture
> on my part (assuming everyone is self-delusional to a degree)
> but I think it is borne out by the nature of being human.
>
> > [brett]
> > Now against this point it might be argued that there
> > are no circumstances where dishonesty with oneself is
> > a moral matter. I concede that this is the traditional view
> > but my contention is that that traditional view is wrong,
> > flawed, and lacking in utility.
> >
> > <me>
> > Fair enough, I'll bite :)
> >
> > [brett]
> > I am arguing that only those that can commit themselves
> > to hold themselves to a rational moral code are in a
> > position to have the sort of maturity that is required to
> > forge the sort of compacts that will best serve the
> > strongest forms of cooperatives and the most extropic
> > societies.
> >
> > <me> substitute "ethics" in for "morality" and I'd agree;
>
> <brett> So far I've been using morality and ethics without
> distinguishing them particularly.
>
> <me> yeah I noticed :) I was trying to distinguish the two
> for the purposes of this dialogue :)
>
> > morality to me is something generally provided for people
> > by external sources (rather than derived by said people);
> > it also deals heavily with intention. Now, I *really* don't
> > care what people's intentions are, just their actions.
>
> <brett> Of course you do. A person that you know was intending
> to steal from you yesterday but did not for lack of an
> opportunity is likely to be filed as such and regarded as
> such. If not you'd be suckered far more often than the
> norm.
>
> <me> "an honest man is just a thief without opportunity" :)
>
> > Intentions are more of a heuristic to decide whether or
> > not someone's future actions will be favorable.
>
> <brett> They certainly are that. And a person's future
> actions, their motives, and their reputation are surely part
> of practical moral deliberations on your part.
>
> <me> nope. I deal on a tit-for-tat basis. Either people adhere
> to a basic set of standards, or they disappear from any sort of
> significant interactions (if I itch, I scratch). There's generally
> a ramp-up period of re-training. They are free to interact with
> others, however they wish. I do not interfere unless called upon
> by another (involved) party. I am *always* fair.
>
> > This discussion could all
> > be simplified by people adopting a non-intention based
> > system, where people are judged by their actions, and
> > statements made by said people are evaluated in that
> > context (as actions).
>
> <brett> I am trying to establish the validity of the statement
> "to thine own self be true". You, I think, are trying to make me
> make the case or to refute it. I am not interested in a simple
> moral system. I am interested in a rational, teachable and
> extensible one.
>
> <me> It's rational, teachable and extensible. In fact, it's
> more teachable than a complicated one.... There are no
> gray areas.
>
> <brett> If I can teach a moral system that has high
> utility to those I teach it to, like teaching the principle of tit
> for tat, then I can be confident that the world will be that
> much more pleasant and predictable for me as a result.
>
> <me> evaluate my suggestion within the context of whether or
> not such a system would fit ur standards and offer high
> utility... I would suggest, humbly, that it does...
>
> > [brett]
> > I do not imagine that any of us ultimately succeeds in
> > avoiding self delusion. But if the charge of hyper-rationality
> > is ever a valid criticism I do not think it can be so on matters
> > of morality where the individuals concerned acknowledge
> > that their take on the universe is inherently subjective and
> > inherently selfish.
> >
> > <me>
> > I think there are degrees of self-delusion; I think
> > more important than self-delusion is the end effect that
> > self-delusion has on the person as a total system.
>
> <brett> In almost all negotiations, and most moral matters
> between persons involving a quid pro quo, the best possible
> outcome for the individuals involved depends on them
> recognizing the best possible outcome for them (without
> their being deluded as to what they want or need) and
> then most closely approximating it.
>
> <me> I think perhaps, we mean two things by being deluded..
> You are referring to a final state of utility reached
> by a rational deduction of what your particular utility table
> is... I am referring to it as a person who has already determined
> their utility table, but reached it through a non-rational pathway.
> Ergo for you, someone who is self-deluding themselves is incapable
> of understanding what their true utility tables are... whereas for me,
> a person who is self-deluding themselves is a person who is mapping
> their stimuli (internal & external) to alter an unpleasant situation to
> a state of utility...where said mapping may not be based either on
> a rational inference chain (thinking), or even by rational examination
> of ones self... What do you think? Can you clarify exactly what you
> mean by self-delusion?
>
> > [brett]
> > It is my contention that if we cannot find a harmony of
> > selfish interests we will not find anything but the illusion
> > of harmony at all.
> >
> > <me> in other words, someplace where everyones needs
> > are met...
>
> <brett> Absolutely not. That is pure fantasy land. The best
> we can hope to achieve is a reasonable compromise where all
> sit down in good faith and, recognizing that all are
> compromising but all are gaining in the aggregate, all act
> in accordance with the agreement.
>
> <me> in other words, someplace where everyones needs
> are met...to the best of the collectives ability?
>
> > [brett]
> > And in order for there to be a harmony of selfish interests
> > there must be real recognition of the nature of oneself
> > and one's needs.
> >
> > <me>
> > or a mapping of sensory input to available physical stimulus.
> > that's another possibility, sans recognition of one's own
> > selfish interests.
>
> <brett> I wouldn't limit one's self-understanding and one's
> understanding of one's needs and desires to the mere physical.
>
> <me> I certainly wouldn't extend it to the metaphysical :)
> Brain processing is a physical process. It's influenced directly
> by physical processes.. Including meta-thoughts (working on
> ur concepts directly with a higher level grammar).
>
> <brett> I think there are social needs in most people that
> are very deep-seated and go beyond the need for physical
> contact.
>
> <me> I wasn't referring to that :) I was referring to altering
> the reality you're living in (cognitively) by biofeedback, but
> on a subconscious level. Something akin to the Siberian train
> conductors lowering their metabolism based on when the train
> was going to pull into the station [arrive].. but more
> sophisticated..
>
> > Same goes for empaths (people with highly developed
> > abilities to sense what others are feeling off of physical
> > [body, speech] cues). They intuitively understand people
> > and respond sans a specific rational understanding of those
> > people. There's no reason (any empaths out there?) to
> > think that emotion-intuition is not being applied to
> > themselves (the equivalent of reflection).
>
> <brett> I am not sure what your point is here. Sociopaths are also
> good readers of patterns. They just don't empathise. But
> I would argue that sociopathy is dysfunction.
>
> <me> Depends on whether or not sociopaths can manage to
> contain their more destructive impulses (or rather, get away
> with it). There's a subpopulation of sociopaths who are
> quite successful, who manage to sustain their needs by
> twisting the letter of the law to suit their need. It's not
> necessarily a dysfunction. And on top of it, it depends on whether
> or not a sociopath is bothered by prison. Who knows?
>
> <brett> And real sociopaths would possibly grin at me
> and say who needs that sentimental bullshit anyway.
> And I'd say you do, here's why ..etc.
>
> <me> I think I can say very fairly that sociopaths do not
> think like normal people; especially (or rather, primarily)
> when it comes to their utility tables... There's no basis
> for comparison :) Ergo why I brought them into the
> conversation...
>
> > [Brett]
> > This is where I think it becomes important
> > to acknowledge to oneself that one can be rational and
> > that one is by nature social. If one does not acknowledge
> > that one is social one is not (by my reckoning) being true
> > to oneself and one does not have the sort of maturity
> > that will enable one to be on good terms with oneself
> > and to form real compacts that have a chance of being
> > honored with others.
> >
> > <me>
> > Ooooh I don't know about that :) You seem to take it
> > that people are, by nature, social creatures. I don't
> > necessarily think that's the case. Or to qualify, people
> > are social by a matter of degree.
>
> <brett> Sure but there is a baseline.
>
> <me> go tell that to the Unabomber.
I would have been happy to point out to the Unabomber that
he was born social, so much so that he couldn't raise his head
to feed.
Then I'd ask him where his sociability ended. It could be an
insightful conversation.
>
> > Some are quite capable
> > of going it alone while others would die if separated
> > from the herd.
None are capable of going it alone yet. But here is the point:
in the future we may re-engineer ourselves to such an extent
that some may indeed feel capable of going it alone. And these
individuals will not necessarily be able to be reasoned with
from the same starting premises. These individuals may become
megalomaniacal, pursuing, as the culmination of their
individuality, dominance over all else. Because, if you have
no empathy - why the hell not?
>
> <brett> Only after some initial basic social assistance has
> been rendered.
>
> <me> We're dealing with [dynamic] adults here, no?
> You don't intend to limit your morality to only those who
> are currently social? Nor do I intend to convert the bad eggs
> to altruism. I see it as far more useful to persuade the good
> eggs that if they do not want war with the bad eggs they had
> better acknowledge that principle 101 for the bad egg is
> likely to be what is in this for me. If there is not an answer to
> that question then conflict will come about.
>
> <brett> Many infants get suboptimal social assistance and the
> outcomes are often dysfunctional people. But they are not
> dysfunctional by choice.
>
> <me> but they're still dysfunctional, at least, according to what
> currently passes for a majority of society.
>
> Yeah. And society pays the price. A more enlightened society
> might see better economies in avoiding the dysfunctional early
> socialisation.
>
> > So i question ur assumption that everyone
> > is social.... Its obviously a core belief in ur system, and certes,
> > generally speaking, it is the case that most people are social.
>
Everyone is social to a degree. Am I really saying that everyone
is reachable through their residual sociability? I doubt it. I think
nature throws up dysfunctional types of all forms, and some
genuine sociopaths can probably only be dealt with as amoral
threats.
>
> <brett> Belief has nothing to do with it. Having learned about
> and observed human infants, I know that physiologically they
> cannot survive without assistance - that they wish to survive
> - that they suckle if they can and cry if they can't is not a
> matter of mere belief.
>
> <me> I'll agree with ur assessment on babies... now whats
> the relevance to your moral system?
Babies will go with the flow and be sociable until their selfish
desires are frustrated. Then they have to learn to compromise
or they engage the world with power plays, and as babies they
lose. But some learn enough from the lesson to grow older and
play better and win for a time. But in the end, in 2003, the
default prognosis is that all of us are dead. The premium is
on cooperation. To get people to cooperate you have to respect
their rational desire to look after their own interests. You have
to offer them deals that really are win-win, or you have to
expect that there will be conflict and that they have no moral
obligation to buckle under and surrender their resources and
power.
>
> > But not all.
>
> <brett> Not all to the same degree. But there is no person
> alive at present (to the best of my knowledge) with the power
> to stay alive without cooperating with others.
>
> <me> but you acknowledge that it is a possibility?
Yes. Imo that is a possibility. For me the interest in morality
is linked to an interest in politics and in the means by which
the possibility may become more of a probability in my lifetime.
Hey, I am social, but I am also rational, and I am in this for
me :-)
> <brett> It is not necessary that social be about niceness;
> it is better, more functional, if it is about an enlightened
> understanding of frailty and the benefits of cooperation.
> I would argue that tyrants that aim for the short glorious
> life of Achilles in 2003 are short-changing themselves.
> They are sub-optimally selfish. With a tweak of their
> value systems they may be able to satisfy more of their
> needs and desires by cooperating. But many of them
> would have to re-learn and I'd expect few of them to
> change what has worked for them if they could not be
> presented with a compelling argument. If there is no
> compelling argument that can be made to their self
> interest then I would say that no real moral argument
> is being put to them at all.
>
> <me> ....except to say that they have presumably
> successfully satisfied their own utility tables....
No, they optimised. But I'd argue they have sold themselves
short. They could, and may in many cases yet, achieve more.
>
> > [brett]
> > If there was a creature that by nature was not social in
> > any sense I would grant by my notion of morality that
> > that creature would have no duties to others and that
> > that creature would not be acting immorally in anything
> > it did to others. If one is sure that one is being
> > threatened by a genuine sociopath by my moral reckoning
> > one would not only be permitted to act in ones defence
> > one would be morally obliged.
> >
> > <me>
> > see now I wouldn't go that far; just because ur being
> > threatened by a sociopath does not necessarily mean they
> > will carry out that act; there's a whole subset of sociopaths
> > that lead "normal" lives without going through the murder
> > sprees that characterize their (by our definitions)
> > less-successful brethren. I think that's more of a policy issue
> > (to be decided upon by each individual)....
>
> <brett> That is exactly right. In the end the individual must
> decide moral policy for themselves. The intelligent individual
> will take into account existing social mores and laws but in
> the end they will not shirk the responsibility of the moral
> decision. They cannot. To shirk is to allow defaults to go
> into play.
>
> <me> yes but ur system implies a judging of that moral code
> (at the end of the day) by other members of that society...
> so individual morality is irrelevant if the rest of the group
> does not consent to that action as being moral...
No. Because we *are* social, we learn at least some moral
codes as we become socialised. We maybe go through
something like Kohlberg's levels of moral reasoning.
We learn terms like utilitarianism and consequentialism
and from this social stock of ideas on moral codes we
fashion our own. We don't invent from scratch.
I'm suggesting we can get better moral codes into the
ground water. These won't remove from individuals the
need to make moral judgements but they will increase
the likelihood that reason and enlightenment are brought
to the process of agreement making and law making.
> <me>my overall point is that saying your action
> (by your own moral standard) is moral is trivially
> easy to do; convincing
> others is far more difficult.
Absolutely. I am in my view making a pretty poor and
very long-winded effort at that here. But perhaps in
working it through like this I will be able to distill it
into something shorter and more convincing because
it will appeal to people as a sort of empowering
knowledge. Like probability theory, or tit for tat.
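(As an aside, the tit-for-tat idea is easy to make concrete. The
sketch below is only an illustration using the conventional iterated
prisoner's dilemma payoffs; the strategy names and numbers are not
anything from this discussion.)

```python
# Tit for tat in the iterated prisoner's dilemma: cooperate first,
# then mirror whatever the other player did last round.
COOPERATE, DEFECT = "C", "D"

def tit_for_tat(opponent_history):
    """Cooperate on round one, then copy the opponent's last move."""
    return COOPERATE if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """A purely 'compete' strategy, for contrast."""
    return DEFECT

# Conventional illustrative payoffs for the row player.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play(strategy_a, strategy_b, rounds=10):
    """Run both strategies against each other and return total scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)  # each player sees the *other's* history
        move_b = strategy_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # exploited once, then retaliates: (9, 14)
```

The point of the illustration is the one made above: a simple,
teachable rule can shift repeated encounters toward cooperation
without requiring either party to be a saint.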
>
> > [brett]
> >
> > In practise I would have some residual doubts about
> > the completeness of the sociopathy of even a creature
> > such as Hitler so I would not feel completely free to
> > exterminate him with extreme prejudice unless I had
> > made a good faith reckoning as to the nature of him
> > as a threat to what I value. Then, having made as
> > rational a determination of the nature of the threat
> > as I could given the time and context, I would feel free
> > to exterminate him with extreme prejudice and I
> > would expect to feel no guilt but only some misgivings
> > that had I more time I might have judged better. ie.
> > My concept of morality is I think in that sense
> > practical. And it is extensible. If others share it,
> > if they act rationally and in accordance with their selfish
> > best interests as they perceive it, I can (in the context
> > of this moral system) not fault them morally.
> >
> > <me>
> > now don't u see a contradiction therein? What if
> > the sociopath, or even loony person (to broaden the set),
> > is merely acting to fulfill his own utility (ergo munching
> > on ur spleen or the like)? I mean, just because someone
> > else is "selfishly" (is there any other way?!) pursuing
> > their own interests, doesn't necessarily mean ur own
> > moral code should approve theirs...
>
> <brett> No my moral code would tell me if this person
> is reasonable I can point out that their aspiration to munch
> on my spleen is well recognized by me and that that is not
> a circumstance that I can permit to prevail. Either we
> reason out a conclusion together or we fight to the death
> now. I then invite them to do their calculations of
> cooperation vs competition and consider how the
> agreement if it is to be cooperation will be effectively
> honored. If at any stage I feel that my appeals to their
> reasoning are hopeless then I fall back on trying to kill
> them before they kill me.
>
> <me> what, you offer them half ur spleen? your moral
> code has just failed to offer a suitable compromise to
> another rational autonomous agent... whereas his
> morality disregards yours, yours fails to achieve its
> basic premise... that of being attractive to other
> rational beings...
No I offer them the viewpoint that getting my spleen
will come at great risk to them and provide them
little nourishment. I suggest that they are better off
regarding me as a resource and seeking to cooperate
with me. I am pretty resourceful and persuasive.
In many cases I'd expect to pull it off because I would
really find ways to cooperate. But in some cases
the universe is a bitch. If the other guy sees me
as food and I can't persuade him otherwise his
circumstances and mine may genuinely be that
desperate. Then, one of us will die and one of us
will eat.
>
> > [Paul]
> > > Pretty much the only time u can consider something
> > > moral or immoral is after the event has occurred,
> > > and then, only for urself. Morality has absolutely
> > > no import in a pre-emptive doctrine.
> >
> > [brett]I don't agree. By my reckoning of morality, when
> > individuals agree to cooperate with each other for their
> > mutual advantage (perhaps at some cost to them on
> > other dimensions were they reckoning their best
> > interests separately) there is a moral bond between
> > them.
> >
> > <me> according to ur definition of morality :)
>
> <brett> Yes. According to a system I'm offering up
> for consideration because I think there is some
> consistency and utility in it and because if I am right
> and it is teachable I will benefit by shifting the
> cooperate (or) compete decision more towards
> cooperation (just as if I had taught the principle
> of tit for tat).
>
> <me> .. but is it the optimal solution?
The optimal solution for me, for the other, or for
both of us? It is in my view more likely to be the
optimal solution in more circumstances because it
is the more rationally approached solution and
by being rationally approached we can consider
each others real needs and measure each others
real willingness to compromise AND at the
end of the day we can always fall back on the
option to compete. Come that unfortunate outcome,
I would compete *very* hard.
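(To make that weighing concrete: the cooperate-or-compete decision
can be caricatured as an expected-value comparison. The probabilities
and payoffs below are invented purely for illustration; nothing in
this discussion fixes them.)

```python
# A toy expected-utility comparison for the cooperate-or-compete
# decision. All numbers are invented for illustration only.

def expected_value(outcomes):
    """Sum of probability * payoff over the possible outcomes."""
    return sum(p * payoff for p, payoff in outcomes)

# Cooperating: likely a modest shared gain, small chance of being cheated.
cooperate = expected_value([(0.8, 10), (0.2, -5)])   # 7.0

# Competing: a shot at the whole prize, a real chance of a costly fight.
compete = expected_value([(0.5, 15), (0.5, -10)])    # 2.5

choice = "cooperate" if cooperate >= compete else "compete"
print(choice)
```

With these made-up numbers cooperation wins, but the whole point of
keeping the compete option on the table is that different numbers can
flip the answer.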
>
> > [Paul]
> > > Anyone that believes to the contrary has not
> > > rationally examined the situation.
>
> <brett> Depends what you mean by belief. Belief
> is a problem word for me because a lot of people
> who are doing more than mere believing use the
> word belief to indicate a sort of less than certain
> knowledge. The problem is that some people
> that use the word belief may have done no
> personal processing on the issue at hand at all but
> may have simply adopted wholesale something that
> they were indoctrinated with.
>
> <me> the latter part of that paragraph is what
> I'm implying; that any rational person who has
> (without resort to emotion, or any rationalization
> or justification) examined the concept of acting
> pre-emptively (sans sufficient proof or knowledge
> of the current environment) is reasoning falsely.
Pre-emption is slippery. Consider the word
pretext. Give a politician a pretext and they are no
longer arguing pre-emption; they are arguing reasonable
and measured response.
>
>
> <brett> If you are implying that my proposed moral
> system is flawed or inconsistent or unclear then, yes,
> I am willing to accept that that could be in fact a valid
> criticism but I'd ask you to point out where because
> as I've said trying to find means of increasing cooperation
> and putting morality and ethics on a more rational footing
> is a worthwhile task.
>
> <me> three points really;
> a) it doesn't make a difference how you arrive at an API,
> it is only important to establish one (rational or otherwise).
>
> There is no reason that an emotional or intuitive based
> person cannot maximize their own utility while being
> consistent from an external point of view of their behaviors.
>
> Ergo, consistency is key {even more so than cooperation,
> because if you really want somebody's cooperation, you
> will invariably have that already incorporated in ur utility
> table [by virtue of it being a necessity to fulfill some other
> greater utility] and ergo you will have something to offer in
> trade}.
I think it does matter that it's rational, not emotional or faith
or belief based, because reasoning facilitates communication
and understanding between sovereign agents far more
effectively. Reason has as its tool language. I am not
pooh-poohing emotion. Emotion is what makes life worth
living, but emotions cannot be conveyed in the same way as
reasons. Nor are they as reviewable and reliable in making
judgements. In science we do well to acknowledge that our
emotions can mislead us. I think we do well also to
acknowledge this in the formulation of contracts and laws.
>
> b) it should successfully deal with all types of people; that
> includes people who want to munch on ur spleen, and
> people who are complete loners[antisocial], and even those
> that aren't rational [non-functionally insane]
It does. Those who finally want to munch on my spleen I regard
as forces of nature, like sharks or lions. I don't regard their desire
to eat me as immoral but I definitely regard my desire to avoid
being eaten as just and I feel fully free to exterminate such with
extreme prejudice.
There are no *complete loners* that I am aware of. Yet the time
of the complete loner is likely to be in the future and these guys
could be very dangerous. For now, dysfunctional loners that
pose a threat, when they pose a threat, are fair game for
taking action against.
There are patterns in most forms of insanity. I'd take my
understanding of the particular person and their alleged ailment
into account and go from there. The code doesn't prescribe a
solution to all, it only provides a (better, I hope) framework.
Individual judgements still need to be made.
> c) the API should be predictable...having a code of laws
> where no-one can predict whether or not they are in
> compliance is pointless. It doesn't make a difference
> whether or not they agree with the fundamental
> reasonings, they should be able to follow the statements
> out to their (hopefully logical) consequences or in barring
> that, have a simple intuitive model to guide them.
>
I think my system is more predictable than others. Those
that present rationally are quickly processed rationally for
cooperate options and compete threats. Those that don't
operate in a rational paradigm still fall into patterns. Animals
are not rational, inanimate objects are not rational, and if a person
behaves in an irrational way they can often make themselves
grist for my mill or the mill of others that do act rationally
and with an eye for the political.
> > [brett]To be frank, I am doubtful that the word belief can
> > be validly coupled (except as crude linguistic
> > shorthand for "this is my operating hypothesis") with
> > a rational examination of any situation. Belief is often
> > used by fairly rational people in just this short hand
> > manner.
> >
> > <me> belief => a statement a rational agent holds true.
>
> <brett> Many use it like this. I don't like it because I spend a lot
> of time considering the politics of language and communication.
>
> <me> I don't; I use a word sans negative or positive connotations.
> Descriptive, accurate and precise. The sentence is where I pass
> judgement on the thought, not the word.
I accept that that is often true.
>
> <brett> I think that if an extrope is debating with a flat earther in
> front of an open minded audience and the audience is only partly
> paying attention and they hear the extrope talking of beliefs on the
> one hand and the flat earther talking of beliefs on the other the
> audience may be seduced into thinking one belief may be just as
> good as the other. I think it is in our interests to get some
> probability
> and quantifiability into the discussion. Belief is language which
> serves the preservation of the status quo.
>
> <me> I agree; if you are ambiguous on how I am using a word,
> just ask for clarification.. vice versa on my end, assumed, of course.
>
You miss the political point. The audience is the voting public. It is
up to extropians to convince them of our relatively non-conservative
agendas or to have to wear the policies that are put in place.
It is for us to be smart in our communications or wear the
consequences of not being. When we use the word belief we weaken
our cases where they should be strongest - we usually have reasoned
and we, unlike our opponents, are (or should be) open to the superior
argument.
>
> > [Brett] By the code of morality I have tried to
> > describe, belief qua belief is immoral. This is because
> > when one is believing one is not reasoning, and when
> > one is not reasoning one's route to one's selfish best interest
> > one is groping with a less than optimal method.
> >
> > <me>
> > depends on how u define it :)
> >
> > Yes. And I don't think extropes generally define it as
> > I do, but my point is that people who hear belief being
> > used may be people we are trying to persuade and it
> > behooves us to use the most persuasive language.
>
> <me> disagree. more important than persuading other
> people is establishing a way of communicating clearly
> and unambiguously, preferably with some training
> into how to accurately convey your thought processes
> to another, and how to detect (and request clarification)
> when others are using a word to convey a different
> semantic construct.
Often the extropic message is most persuasively put
when it is put in terms of clear and unambiguous
communication.
I have personally seen huge decisions on national policy
"justified" by political leaders not on the basis of the
evidence but on the basis of belief.
> Ergo I never argue persuasively, only defensively :)
> Lawyers argue persuasively, and just about everybody
> I know (excluding lawyers) hates dealing with them
> [ergo a necessary evil]....
Very much so. Fact is the legislation that exists in our
countries is established in a particular way. There is no
point wishing it were otherwise, it is as it is. Therefore
I'd rather be an effective lobbyist for the policies and
laws I want to see enacted (or not enacted) than not
be.
>
> > Belief is a poor word for conveying significant amounts
> > of intellectual exercise.
> >
> > And certes, just because
> > you believe something doesn't necessarily make it false
> > (or ill-advised); classic example, I believe the sun will rise
> > tomorrow morning... Of course the veracity of that statement will
> > require observation tomorrow morning; but the belief is both advisable
>
> > and statistically speaking, fairly certain... In other words, belief
> > and logic are not necessarily at odds; it depends on how you define
> > it.
>
> <brett> My point is that when you speak in a political forum you
> do your own thought process which is based on considerably
> more than mere indoctrination by another (I hope) a disservice
> when you use the word belief instead of another word.
>
> <me> I see; you're making a broader point than our discussion :)
Yes. Sorry I get evangelical sometimes.
>
> <brett>(This is because not everyone who hears you use the word
> belief knows that you will have done more processing.
> It seems to me that many extropes fail to realise that the audience,
> the rest of the world doesn't give away free credibility points
> for one wacky belief over another.
>
> <me> for me, personally, in day-to-day interactions, people always
> quickly realize that any thought I express has been well-thought out.
> I have not, of course, attempted to communicate in a traditional public
> forum {press, tv, etc} other than internet-based forums (list-servs and
> usenet). Incidentally, mostly because I'm not really trying to persuade
> anybody of anything outside of my scope of interaction. I'm primarily
> interested in getting my viewpoints out, to confirm whether or not there
> are any flaws in my thinking that I have missed, or to flesh out an idea
> by gathering other people's inputs on the subject, generally through
> anecdotal experiences... The scientifically-based stuff I get through
> journals and books.
Perhaps you are still relatively young and have yet to grow your political
teeth. This is fair enough. If you come to see the connection between
the stuff we aspire to and the legislation that goes through national
parliaments on stem cells and nanotechnology and intellectual property
you may see things differently. I do not mean to be patronising.
Actually I'd like to be empowering. Many young extropes could be
potent political forces for change in their own individual right if only
they perceived the need and put some time into acquiring the skills.
> > [brett]My contention is that as soon as one becomes
> > a "believer" one has ceased to hold to the principle
> > of to thine own self be true - unless one is incapable
> > of reasoning - (or one must reach a tentative
> > conclusion based on the imperative to live and
> > act in real time).
> >
> > <me> hahahaha :) re - ur last qualification :)
> > well since we're all stuck in this current universe... :)
>
> <brett> Yes, but again I'd go back to pan-critical rationalism.
>
> <brett> Without ever getting absolute certainty there are techniques
> which we can learn which give us a much higher probability of
> getting a correct (a useful) answer.
>
> <me> until you discover that extreme you didn't consider..
> I'm an engineer (mentality-wise), so for the most part, I
> always have to build/plan for the worst case scenario..
> Theoretically, that means I have a smaller margin for error
> before I'm willing to sweep it under the rug as not worth
> planning for.
Sorry, don't follow. Sounds right, but I don't see the
relevance.
>
> > > Generally speaking, I have no use for morality;
> > > just ethics [standard api, consistently adhered to,
> > > logically derived, based on reality]....
> >
> > [brett]I'm reading api as 'application programming interface'.
> >
> > <me> yuppers.
> >
> > [brett]"Generally speaking" I suspect you are unlikely to
> > enjoy discussing morality and/or ethics much further
> > with me ;-)
> >
> > <me> it doesn't really bother me, if thats what u're asking :)
>
> <brett> I was asking. I don't enjoy boring people, I just risk it ;-)
>
> <me> thats fair :) generally the conversation ends up dying out when
> nobody bothers responding :)
I expect that will be after this post, and that's fair enough. It has
been good to try and write down some stuff and try and get some
ideas straight. Or straighter.
> > but I've pretty much made up my ethical system, at least
> > in terms of the larger ruleset (meta-rules)...
> > some of the smaller "behaviors" are data-driven
> > (tit-for-tat, etc) :)
>
> <brett> As indeed in practice most of us have. If I am right
> and a better, more universal ethical system can be derived
> I would expect that in most people's cases there would be
> very little observable difference in how they'd behave.
> But then on the other hand when one starts to routinely
> reason as opposed to believing one is in a position to
> converse mind to mind with other reasoning beings.
> Beliefs can very easily become entrenched positions.
> I think to reason when reason is available is more
> social and because I think humans are social (their
> interests are best served by cooperation) to reason is
> moral, to believe is not.
>
> <me> I agree with everything but ur last statement :)
> as I said, give me a simple, robust API any day.
> It doesn't matter to me if there is a one-to-one mapping
> between it and some derived, generic ethical system,
> or there is a many-to-one mapping. I generally prefer
> rationally based systems in that the API happens
> to conform to reality [generally the other requirement,
> don't want to end up getting killed or maimed for my API]...
To use your terminology, I'm more concerned with the
consequences of trying to push forward with a suboptimal
API.
> I dunno, overall, I have some fairly big problems with
> your API.
I can tell. But thanks for persisting; it's helped me clarify
my thoughts.
>
> I think more than anything else though, it's that social
> requirement thing... :) Then again, I've been described
> by several people in my life as a human computer...
:-) I agree the social bit is the weaker bit.
Regards,
Brett
{PS: No reply expected - frankly it would scare
me to have to revisit this thread at this length again
soon - I need to break it out, seriously edit and
see what happens next}
This archive was generated by hypermail 2.1.5 : Tue Aug 05 2003 - 13:05:45 MDT