RE: To thine ownself be true?

From: Paul Grant (shade999@optonline.net)
Date: Tue Aug 05 2003 - 21:21:18 MDT


    -----Original Message-----
    From: owner-extropians@extropy.org [mailto:owner-extropians@extropy.org]
    On Behalf Of Brett Paatsch
    Sent: Tuesday, August 05, 2003 2:22 PM
    To: extropians@extropy.org
    Subject: Re: To thine ownself be true?

    Paul Grant <shade999@optonline.net> writes:

    > <brett> There are some classes of pre-emptive
    > action made on the basis of genuinely held, earnestly
    > reasoned (note I am not touching *belief* here) views
    > that would require action in my view.

    > <me> trying to justify a pre-emptive measure on the
    > notion that it is "genuinely held" or "earnestly reasoned"
    > is a rationalization in my opinion, generally to excuse
    > the type of behavior you are engaging in...

    <brett>By including the word 'generally' above aren't you in
    fact conceding my point? I.E. in *some* specific
    circumstances pre-emptive action *is* morally justified?

    <me> reread what I wrote; I clearly state it is a rationalization,
    and apply "generally" to the cause behind it. I am simply stating
    that there may be other reasons to rationalize other than seeking
    an excuse for pre-emptive action...

    > the limit on this line of reasoning though, is in the
    > duration of the act
    > .... for instance, say you were prescient, and saw a man
    > who was going to mug you (with a knife) 10 minutes
    > from now, and hit him over the head; then you would be
    > acting morally (given your prescience). Let's say you are
    > not prescient, and you hit him over the head on the
    > possibility that he might mug you; then you are acting
    > immorally.

    <brett> In the real world, where our moral judgement is supposed
    to assist us (or at least that is my contention), we are
    *never* fully prescient and so there is always *some*
    chance the suspected or likely mugger may not in fact
    mug us.

    <me> perhaps; in real life, I generally prepare for the attack,
    rather than instigate a pre-emptive attack.

    <brett> Assuming one values oneself, how can we do
    otherwise than weigh up the chances as best we can?
    My answer - we can't. Therefore the point becomes
    how best we can.

    <me> learn to recognize (and avoid) abuse and abusive
    behaviors.. that includes learning not to abuse others
    in an attempt to prevent said abuse.

    <brett> At this point I think it's worth distinguishing between
    a moral code, which may be a preconsidered
    framework that one uses to help reach a particular
    moral judgement, and moral judgements per se.

    <brett> There are *no* moral codes that provide definitive
    answers to all the moral dilemmas that arise, just as
    there are no maps on a scale of 1:1; therefore, whenever
    a particular moral judgement is required, there is no
    dodging that the subjective individual must make it,
    with or without the benefit of a more or less
    sophisticated moral code.

    <me> I see things in black and white; I don't have
    a particularly sophisticated moral code; just a
    sophisticated world-view. The rules themselves are
    quite easy. I might add that you can generate a complex
    world view from an array of black/white values... and
    if it is a matter of black and white values, then my moral
    code does have a 1-to-1 correspondence.

    > Let's say you are not prescient, and he is mugging
    > someone else (as it is apparent to you from your
    > vantage point), and you intervene by hitting him
    > over the head... Then your actions may or may
    > not be immoral, on the basis that he may not be
    > the one doing the mugging, but rather, may be the
    > muggee.

    <brett> Actually I'd say in the circumstances you describe
    the person *has* acted morally, but with poor
    judgement, so poor in fact that they may be found
    to have acted illegally.

    <me> you would; I would not; the reason being that the
    proper action is to act to separate them, without harming
    one party or the other, precisely because you are ill-informed.
    You should have a duty (under your moral code) to negotiate
    in good faith, and having said that, you have an obligation to
    do due diligence in an attempt to (as completely as possible)
    understand the situation. Properly exercised restraint is a
    remarkably under-appreciated quality.

    > The point being that you have to consider the
    > granularity of the event, the knowledge
    > you had as an autonomous agent, the environment
    > you're in, and the action chosen, and the outcome of
    > that action...

    <brett> Sure. But the "you" in this case is a subjective individual
    using their own judgement, when such judgement may
    or may not be particularly good. So it also behooves us to
    consider the granularity of the *moral code* that is taken
    by many of us into situations where it can guide particular
    moral judgements.

    <me> judgement and morality are intertwined, no?
    you can't have a good moral code if your judgement
    is consistently unable to judge according to that moral
    code.. As to granularity of a moral code, I think it is sufficient
    to establish a clearly understood (objective) test to determine
    the level of granularity at which any particular
    moral code can be applied...

    <brett>If one is running around in 2003 holding that the 10
    commandments are all the moral code that is needed,
    one is going to come up against some particularly
    curly challenges in interpreting how to operationalise
    the directive that thou shalt not kill.

    <brett>Even this simple edict is subject to interpretation. Life in
    2003 is known to take place on more than the organismic level.
    Cells are alive. And human cancer cells are human life.

    <me> sure, if you don't consider that human cancer cells replicate
    through mitosis rather than fertilization. of course (and this would
    be VERY interesting); what would happen if a cancer colony was
    able to generate, say, sperm.... would it be considered life...
    intriguing thought.

    <brett>Clearly it is absurd to argue that a cancer cell, or
    multiples of them, are of equal moral weight with a person
    dying of cancer. Yet this is not much of an exaggeration
    beyond the proposition that say all embryos are a form of
    human life, when by human life what is obviously meant
    is personhood.

    <me> I say tear it out of the womb and see if it survives
    unaided. if it does, boom, human being. if it doesn't,
    well then, at best it was an incomplete human being who
    died. There is no inherent reason to relegate women
    to the equivalent of biological life carriers.

    <brett>
    Before laws can be set that codify legally what may and
    may not be done, it is prudent to have a moral discussion
    where the words we use do not obfuscate the real
    issues at hand. Issues such as how a civil society
    weighs the rights of potential persons (embryos, fetuses etc.
    at different stages). When we do not decide or address
    these questions, public policy continues to be made on
    the basis of outdated moral and legal codes. And persons
    suffer needlessly.

    <me> Oh I would agree :) regarding a code of laws being
    unduly influenced by any one particular moral code. Personally,
    I don't think laws should attempt to legislate morality. It is
    up to people to do that.

    <snipped stuff on legal code >

    > Of course you could always say (arbitrarily) that I was reacting to
    > the best of my abilities to the best of my knowledge ergo my action
    > was moral by my system of morals/ethics.... But I tend to think of
    > that as a cop-out.

    <brett> Really? I think the key word here is 'tend'. How could you
    put a moral obligation on someone to act better than the
    best of their abilities and knowledge?

    <me> I don't in real life; I hold everybody to the same standard
    that I hold myself to.

    <brett> Do you think there is a moral sphere separate from the
    legal sphere? Some apparently don't. I think the legal sphere
    is smaller than the moral sphere.

    <me> I think the two are completely seperate systems; that is not
    to say a legal code cannot borrow from an established moral code
    (or vice versa in cases where moral codes are not derived from
    an unchangeable word of god).

    >
    > > In relation to your secondary point (stated in this letter); I
    > > really don't think morality has anything necessarily to do with
    > > self-delusion, or the acknowledgement thereof. Or rather, there is
    > > no truth that states necessarily you have to be honest, ergo an act
    > > of dishonesty (as it relates to self-delusion) does not violate any
    > > particularly great truth.
    >
    > <brett> First the status of morality and the rationality of
    > ethics is pretty widely regarded at least so far as I am
    > aware in philosophical circles as being almost a matter
    > of opinion.
    > (eg. Bertrand Russell. History of Western Philosophy).
    >
    > <me> I'm sure it is; until you run into an event that requires faith
    > or belief outside of rationality.

    <brett> Actually I think BR would hold the line even in the face of
    your example. But BR overlooked a few things, as Gödel pointed out.
    Maybe he abandoned the search for a rational ethics too early.

    <me> I never read BR so I can't really comment on his particular
    philosophical bent. my circuit on philosophy keeps getting delayed;
    I can't stand wading through stuff that's not relevant (given my
    stripped down operating assumptions). I think someday I'll get bored
    enough to get to the juicy parts [probably in some condensed version
    of all the philosophies].

    > Ergo if I'm marooned on a desert island for 60 years,
    > does it really make a damned difference if I hallucinate Marilyn
    > Monroe on the island with me in order to remain sane?

    <brett> Legally no. Morally? Depends. By the code I've been arguing it
    *would* make a difference if there was some net difference in
    utility to you. ie. If you really *could* make the decision to
    hallucinate to preserve your sanity (or not) then I'd argue the
    moral choice is the one that you *think* will result in the best
    outcome for you.

    <me> ergo my confusion, since I thought you stated that self-delusion
    was immoral. and yes, you can make a decision to hallucinate... same
    as you can wake up (out of the equivalent of a dream) when you are
    rescued.

    <brett>
    Whether it would in fact yield the best outcome for you is not the
    point, as the facts of the outcome are not knowable to you at the
    time you decide.

    <me> I would say that they are; I would say that the only reason to
    willingly self-delude via fantasy or whatever is specifically because
    you have examined the outcomes and decided that reality sucked and you
    could do better. I would say that you would end up having to constantly
    (subconsciously) monitor reality to decide when you would come out of
    your protective insanity.

    <brett> Now that's the moral code. The reason for the moral code is
    that usually judgements will be required in real life which one
    cannot anticipate, and the more sophisticated your moral code,
    the better (the more enlightened) your judgement
    of your own best interests will be.

    <me> again I balk at the requirement for a sophisticated moral code;
    you can have an incredibly simple moral code, but apply it through
    the use of a sophisticated world view (ergo your judgement).

    <brett> In this particular case I don't think there is much latitude for
    immoral action as you really would be alone on the desert island in the
    situation you stipulate. Of course the situation you stipulate could not
    arise. One would never know one was going to be marooned for sixty years
    AND CHOOSE to hallucinate. Hallucinations for the most part are going to
    be dysfunctional even on the island.

    <me> .. for the most part? and I do dispute your choosing to
    hallucinate; ever hear of self-hypnosis? If you can get yourself into
    a deep enough trance, you can plant post-hypnotic suggestions that
    trigger when you wake... including the post-hypnotic suggestion to
    renew itself until you are rescued. And you can cause hallucinations
    in a deep enough trance btw. And that's not even calling upon the
    experience of mystics, or people who have sufficiently advanced
    manipulation techniques...

    > I tend towards a functional and dysfunctional definition of sanity;
    > if it's dysfunctional, then you are not being ethical.

    <brett> This seems to be confounding sanity with ethics.

    <me> Not really :)

    <brett> Which is problematic if the insane cannot make
    sound judgements in their own interests by virtue of being insane.

    <me> ergo my distinction between functional and dysfunctional insanity.

    > if it's functional, you are being ethical.
    > and since functionality is related to the environment you are
    > operating in, ergo my comment about self-delusion not really having
    > anything to do with morality.
    >
    > I definitely think everyone is engaged in it to some degree
    > (self-delusion), and to the extent that it helps you, it's ethical

    <brett> So do I. The human condition is mortal. It would hardly
    behoove us to dwell on it excessively and abandon hope
    when there was none. Perhaps the illusion of a life after
    death only becomes dysfunctional when it gets in the way
    of the realisation of longer life in practice.

    <brett> In the vast majority of cases self-delusion *is* going to be
    harmful. In those circumstances where it is not harmful to
    anyone, including the person who is self-deluded, then I'd
    agree it is not immoral.

    <me> ok well then we've settled that line of logic :)
     
    > <brett> I find this conclusion (Bertrand Russell's)
    > powerful, dangerous and deeply unsatisfying so I am
    > keen to have at it.
    >
    > <me> I was just telling my little sister yesterday about one of the
    > classical issues in my life (at one point during my younger years); at
    > what point does being too intelligent start to harm you (the classic
    > form being, if you could trade intelligence for guaranteed happiness,
    > would you do it)... most intelligent people say no; I think the really
    > intelligent people though, when they consider it, say yes.

    <brett> I think this is pure speculation. Would a less intelligent
    you be you?

    <me> does it make a difference if you are gauranteed to be happy?

    <brett> If you think so there may be possibilities for you
    to chart a life for yourself that involves more happiness
    and less intellect. But personally I don't think so. How do
    you aim at happiness without identifying something that
    will make you happy? Happiness is not itself a thing that
    can be pursued.

    <me> ergo my point :) if you can't guarantee happiness by
    method of intelligence, and some mythical blue genie (whom,
    to the best of your abilities to discern, is capable of granting it)
    is willing to guarantee your happiness at the cost of your
    intelligence, then any sane, rational person of sufficient
    intellect would not think twice. This assumes that everyone
    wants to be happy [maximized utility table]... which I don't
    think is an unreasonable assumption...

    > This, of
    > course, assumes that people intuitively seek to maximize
    > their utilities, and said maximization of utility defines a
    > state of happiness [which is, I think, reasonable]...

    <brett> I don't. I think it's premature at best and problematic at
    worst. One cannot be happy without a cause. Happiness
    per se is not pursuable. Pleasure is. Lesser levels of
    sentience are. But I doubt these are what appeal to you
    as an alternative.

    <me> take a look at a little kid sometime :)

    <snip happiness is a sideeffect>
     
    > Any moral system you build on that
    > premise is doomed to fail because it does not take into account
    > actions by that subpopulation of people (antisocial individuals who
    > are operating on a different ethical system). I would state that your
    > assumption that there is a propensity to reason is a reasonable one,
    > in that it is necessary for the ability to recognize other autonomous
    > agents' actions for what they are: expressions of their own
    > moral/ethical systems..

    <brett> Ah I think you missed my point. The potential to persuade
    using the sociability aspect *is* far stronger when individuals are
    powerless, and my point is that all those who are mortal are now
    becoming *aware* that they possess a poor form of wealth and power
    if it can't extend their life and health.

    <me> ... and these are the only people you're interested in approaching?

    <brett> There is an opportunity there to get them to revisit the
    social compact. But these cagey old survivors will not fall for bs.
    When arguments are put to them that are not in their interest, they
    will not buy into them.

    <me> it's been my experience that people who are very wealthy
    generally don't give a shit about the social compact. They got (and
    preserve) their wealth at the cost of other individuals... I would
    agree that you need to put it in terms that involve greed.

    <brett> So in my view a moral argument cannot be put to a rich or
    powerful individual unless it is couched in terms of offering
    *something* for them. We live in an historic period. In this period
    it might be possible to promote a policy of more life for all or
    more life for none.

    <me> good luck getting the rich to give a shit about the poor (past
    the ones making off with their valuables)....

    <brett> The alternative may be cabals of the powerful working to
    'immortalise' themselves. Such a scenario may restart
    "history" whose demise was greatly exaggerated.

    <me> probably :) I hope to be part of a successful cabal :)
    but only if quality of life is high...

    > <brett> Further those
    > who do not endeavour to understand themselves,
    > what manner of creature they are, are not going to be
    > in a position to know what the most optimal compromises
    > for them are when compromises need to be made.
    >
    > <me> I've met several people who are extremely intuitive, but unable
    > to verbalize (form coherent sentences) expressing their
    > viewpoints... they just know what is right for them, and what is not...
    > how does your system encompass them?
     
    <brett> They learn to reason and they learn to use language to
    persuade. They learn to understand what they want. This gives them the
    best chance to make their way and improve their situation as they go.
    It does not guarantee them success.

    <me> and if the overall performance of this new system ends up being
    worse than their intuitive model? what then?

    <brett> The universe in which hard work guarantees success is not
    this one in my view.

    <me> work smarter, not harder has always been my motto...

    > <brett> A person
    > that deludes themselves that they are a different sort of
    > creature with different sets of drivers and needs than
    > they actually have is precluded from sitting down at table
    > to negotiate for their own best interests because they do not
    > know their own best interests. A person that deludes
    > themselves willingly can hardly be a person that others
    > would want to engage in moral compacts with.
    >
    > <me> according to you :) to me its fine :) In fact, i rather like
    > space cadets :)

    > It's not the space cadets that you have to worry about.
    > It's the wizened old cynics and manipulators that figure
    > that life is a bitch and that they are not going to join the
    > ideological idiocy of submission. These guys have the
    > power to fuck up all your plans and ironically shortchange
    > themselves in their cynicism too. They are not greedy
    > enough for life (according to this theory). There is a fatal
    > foolishness in their cynicism. They may be happy to have
    > ten more years of power in the forms that they have
    > become accustomed to. They may rate their progress
    > not against what is possible but against how big a difference
    > there is between them and the common man.

    <me> that is common btw; in people from all walks of life
    (competition versus a neighbor) versus global competition...
    it seems (given the broadness) that it is inherent to people's
    points of view... besides, the trick to manipulation is to learn
    how to do it yourself.
     
    > The logical consequence of this line of thinking unchecked
    > is the formation of power cabals and starker separations between the
    > haves and the have nots (in which ironically even the haves will have
    > less than they may have had).

    <me> that assumes there is no such thing as a steady accretion of
    knowledge in the hands of a few. there is no reason to limit it to
    the life of any one person (or collection); you can consider your
    lineage for instance.

    <snip of antisocial/sociopathic stuff>
     
    > > [Brett]
    > > This is where I think it becomes important
    > > to acknowledge to oneself that one can be rational and
    > > that one is by nature social. If one does not acknowledge
    > > that one is social one is not (by my reckoning) being true
    > > to oneself and one does not have the sort of maturity
    > > that will enable one to be on good terms with oneself
    > > and to form real compacts that have a chance of being
    > > honored with others.
    > >
    > > <me>
    > > Ooooh I don't know about that :) You seem to take
    > > that people are by nature, social creatures. I don't
    > > necessarily think thats the case. Or to qualify, people
    > > are social by a matter of degree.
    >
    > <brett> Sure but there is a baseline.
    >
    > <me> go tell that to the Unabomber.

    <brett> I would have been happy to point out to the Unabomber that
    he was born social, so much so that he couldn't raise his head to feed.

    <me> granted :) but that doesn't really nullify the fact that he
    did become extremely antisocial afterwards...

    <brett> Then I'd ask him where his sociability ended. It could be an
    insightful conversation.

    <me> probably when he realized it was a losing game for his utility
    tables...

    >
    > > Some are quite capable
    > > of going it alone while others would die if seperated
    > > from the herd.

    <brett> None are capable of going it alone yet. But here is the point: in
    the future we may re-engineer ourselves to such an extent that some may
    indeed feel capable of going it alone. And these individuals will not
    necessarily be able to be reasoned with from the same starting premises.
    These individuals may become megalomaniacal, pursuing as the culmination
    of their individuality dominance over all else. Because, if you have no
    empathy - why the hell not?

    <me> why not indeed?
     
    > <brett> Only after some initial basic social assistance has
    > been rendered.
    >
    > <me> We're dealing with [dynamic] adults here, no?
    > You don't intend to limit your morality to only those who
    > are currently social?

    > Nor do I intend to convert the bad eggs
    > to altruism. I see it as far more useful to persuade the good
    > eggs that if they do not want war with the bad eggs they had
    > better acknowledge that principle 101 for the bad egg is
    > likely to be "what is in this for me?". If there is no answer to
    > that question then conflict will come about.
    >
    > <brett> Many infants get suboptimal social assistance and the
    > outcomes are often dysfunctional people. But they are not
    > dysfunctional by choice.
    >
    > <me> but they're still dysfunctional, at least, according to what
    > currently passes for a majority of society.
    >
    > Yeah. And society pays the price. A more enlightened society might
    > see better economies in avoiding the dysfunctional early
    > socialisation.
    >
    > > So I question your assumption that everyone
    > > is social.... It's obviously a core belief in your system, and certes,
    > > generally speaking, it is the case that most people are social.
    >

    <brett> Everyone is social to a degree. Am I really saying that everyone
    is reachable through their residual sociability? I doubt it.
    I think nature throws up dysfunctional types of all forms and
    some genuine sociopaths can probably only be dealt with as amoral
    threats.

    <me> watch some interviews with a sociopath :) they're really quite
    fascinating... anyways, there's no reason to render a sociopath a
    threat; in fact, I'd say they can be harnessed to detect flaws in
    current modes.
     
    > > But not all.
    >
    > <brett> Not all to the same degree. But there is no person alive at
    > present (to the best of my knowledge) with the power to stay alive
    > without cooperating with others.
    >
    > <me> but you acknowledge that it is a possibility?

    <brett> Yes. Imo that is a possibility. For me the interest in morality
    is linked to an interest in politics and in the means by which the
    possibility may become more of a probability in my lifetime. Hey, I am
    social, but I am also rational, and I am in this for me :-)

    <me> I suggest you pick up a microscope and a centrifuge then :)
     
    > <brett> It is not necessary that social be about niceness;
    > it is better, more functional, if it is about an enlightened
    > understanding of frailty and the benefits of cooperation.
    > I would argue that tyrants that aim for the short glorious
    > life of Achilles in 2003 are short-changing themselves.
    > They are sub-optimally selfish. With a tweak of their
    > value systems they may be able to satisfy more of their
    > needs and desires by cooperating. But many of them
    > would have to re-learn, and I'd expect few of them to
    > change what has worked for them if they could not be
    > presented with a compelling argument. If there is no
    > compelling argument that can be made to their
    > self-interest then I would say that no real moral argument
    > is being put to them at all.
    >
    > <me> ....except to say that they have presumably
    > successfully satisfied their own utility tables....

    <brett> No, they optimised. But I'd argue they have sold themselves
    short. They could, and may in many cases yet, achieve more.

    <me> perhaps, but they certainly won't be listening... if they're
    satisfied,
    why change the status quo?

    ----------------------------------- SNIP, wait for part 2 :) damn this
    is long :) --------------

    > > [brett]
    > > If there was a creature that by nature was not social in any sense
    > > I would grant by my notion of morality that that creature would have
    > > no duties to others and that that creature would not be acting
    > > immorally in anything it did to others. If one is sure that one is
    > > being threatened by a genuine sociopath, by my moral reckoning
    > > one would not only be permitted to act in one's defence,
    > > one would be morally obliged.
    > >
    > > <me>
    > > see now I wouldn't go that far; just because you're being threatened
    > > by a sociopath does not necessarily mean they will carry out that act;
    > > there's a whole subset of sociopaths that lead "normal" lives
    > > without going through the murder sprees that characterize their (by
    > > our definitions) less-successful brethren. I think that's more of a
    > > policy issue (to be decided upon by each individual)....
    >
    > <brett> That is exactly right. In the end the individual must decide
    > moral policy for themselves. The intelligent individual will take into
    > account existing social mores and laws, but in the end they will not
    > shirk the responsibility of the moral decision. They cannot. To shirk
    > is to allow defaults to go into play.
    >
    > <me> yes but your system implies a judging of that moral code (at the
    > end of the day) by other members of that society... so individual
    > morality is irrelevant if the rest of the group does not consent to
    > that action as being moral...

    No, because we *are* social we learn at least some moral
    codes as we become socialised. We maybe go through
    something like Kohlberg's levels of moral reasoning.
    We learn terms like utilitarianism and consequentialism,
    and from this social stock of ideas on moral codes we
    fashion our own. We don't invent from scratch.
    I'm suggesting we can get better moral codes into the
    ground water. These won't remove from individuals the
    need to make moral judgements, but they will increase
    the likelihood that reason and enlightenment are brought
    to the process of agreement making and law making.

    > <me> my overall point is that saying your action
    > (by your own moral standard) is moral is trivially
    > easy to do; convincing
    > others is far more difficult.

    Absolutely. I am, in my view, making a pretty poor and
    very long-winded effort at that here. But perhaps in
    working it through like this I will be able to distill it
    into something shorter and more convincing, because
    it will appeal to people as a sort of empowering
    knowledge. Like probability theory, or tit for tat.
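    Since tit for tat is invoked here as a piece of "empowering knowledge",
    a minimal sketch of it in an iterated prisoner's dilemma may help. The
    payoff numbers below are the conventional textbook values (T=5, R=3,
    P=1, S=0) and the function names are my own illustration, not anything
    from this thread:

```python
# (my move, their move) -> my payoff; "C" = cooperate, "D" = defect.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    """Cooperate first; afterwards copy the opponent's last move."""
    return "C" if not history else history[-1]

def always_defect(history):
    """A simple adversary that never cooperates."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated game; return each side's total payoff."""
    hist_a, hist_b = [], []   # the moves each side has *seen* so far
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(hist_a)
        b = strategy_b(hist_b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(b)      # a observes b's move
        hist_b.append(a)      # b observes a's move
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # exploited once, then retaliates: (9, 14)
```

    The point the strategy makes is the one argued above: cooperation can be
    put to a purely self-interested agent in terms of their own payoffs.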

    >
    > > [brett]
    > >
    > > In practice I would have some residual doubts about
    > > the completeness of the sociopathy of even a creature
    > > such as Hitler, so I would not feel completely free to exterminate
    > > him with extreme prejudice unless I had made a good faith reckoning
    > > as to the nature of him as a threat to what I value. Then, having
    > > made as rational a determination of the nature of the threat
    > > as I could given the time and context, I would feel free
    > > to exterminate him with extreme prejudice and I
    > > would expect to feel no guilt but only some misgivings
    > > that had I more time I might have judged better. ie.
    > > My concept of morality is I think in that sense
    > > practical. And it is extensible. If others share it,
    > > if they act rationally and in accordance with their selfish
    > > best interests as they perceive them, I cannot (in the context
    > > of this moral system) fault them morally.
    > >
    > > <me>
    > > now don't you see a contradiction therein? What if
    > > the sociopath, or even loony person (to broaden the set), is merely
    > > acting to fulfill his own utility (ergo munching on your spleen or
    > > the like)? I mean, just because someone else is "selfishly" (is there
    > > any other way?!) pursuing their own interests, doesn't necessarily
    > > mean your own moral code should approve theirs...
    >
    > <brett> No, my moral code would tell me if this person
    > is reasonable I can point out that their aspiration to munch on my
    > spleen is well recognized by me and that that is not a circumstance
    > that I can permit to prevail. Either we reason out a conclusion
    > together or we fight to the death now. I then invite them to do
    > their calculations of cooperation vs competition and consider how the
    > agreement, if it is to be cooperation, will be effectively
    > honored. If at any stage I feel that my appeal to their
    > reasoning is hopeless then I fall back on trying to kill
    > them before they kill me.
    >
    > <me> what, you offer them half ur spleen? your moral
    > code has just failed to offer a suitable compromise to
    > another rational autonomous agent... whereas his
    > morality disregards yours, yours fails to achieve its
    > basic premise... that of being attractive to other
    > rational beings...

    No, I offer them the viewpoint that getting my spleen
    will come at great risk to them and provide them
    little nourishment. I suggest that they are better off
    regarding me as a resource and seeking to cooperate
    with me. I am pretty resourceful and persuasive.
    In many cases I'd expect to pull it off because I would
    really find ways to cooperate. But in some cases
    the universe is a bitch. If the other guy sees me
    as food and I can't persuade him otherwise, his
    circumstances and mine may genuinely be that
    desperate. Then, one of us will die and one of us
    will eat.

    >
    > > [Paul]
    > > > Pretty much the only time u can consider something
    > > > moral or immoral is after the event has occurred,
    > > > and then, only for urself. Morality has absolutely
    > > > no import in a pre-emptive doctrine.
    > >
    > > [brett]I don't agree. By my reckoning of morality, when individuals
    > > agree to cooperate with each other for their mutual advantage
    > > (perhaps at some cost to them on other dimensions were they
    > > reckoning their best interests separately) there is a moral bond
    > > between them.
    > >
    > > <me> according to ur definition of morality :)
    >
    > <brett> Yes. According to a system I'm offering up
    > for consideration because I think there is some
    > consistency and utility in it and because if I am right
    > and it is teachable I will benefit by shifting the
    > cooperate (or) compete decision more towards
    > cooperation (just as if I had taught the principle
    > of tit for tat).
    >
    > <me> .. but is it the optimal solution?

    The optimal solution for me, for the other, or for
    both of us? It is in my view more likely to be the
    optimal solution in more circumstances because it
    is the more rationally approached solution, and
    by being rationally approached we can consider
    each other's real needs and measure each other's
    real willingness to compromise AND at the
    end of the day we can always fall back on the
    option to compete. Come that unfortunate outcome,
    I would compete *very* hard.
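    The "calculations of cooperation vs competition" invoked above can be
    read as an ordinary expected-utility comparison; a hypothetical
    sketch (every probability and payoff below is invented purely for
    illustration):

```python
# Compare cooperating with fighting by expected utility.
# All numbers are invented for illustration, not from the thread.

def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs."""
    return sum(p * v for p, v in outcomes)

# Cooperation: likely a modest mutual gain, small risk of betrayal.
cooperate = expected_value([(0.9, 10), (0.1, -5)])

# Competition: a chance at a larger prize, bigger chance of a costly fight.
compete = expected_value([(0.4, 20), (0.6, -15)])

print(cooperate, compete)  # -> 8.5 -1.0: cooperation wins with these numbers
```

    With different numbers the comparison can flip, which is exactly the
    "fall back on the option to compete" clause in the text.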

    >
    > > [Paul]
    > > > Anyone that believes to the contrary has not
    > > > rationally examined the situation.
    >
    > <brett> Depends what you mean by belief. Belief
    > is a problem word for me because a lot of people
    > who are doing more than mere believing use the
    > word belief to indicate a sort of less than certain
    > knowledge. The problem is that some people
    > that use the word belief may have done no
    > personal processing on the issue at hand at all but
    > may have simply adopted wholesale something that
    > they were indoctrinated with.
    >
    > <me> the latter part of that paragraph is what
    > I'm implying; that any rational person who has
    > (without resort to emotion, or any rationalization
    > or justification) examined the concept of acting
    > pre-emptively (sans sufficient proof or knowledge
    > of the current environment) is reasoning falsely.

    Pre-emption is slippery. Consider the word
    pretext. Give a politician a pretext and they are no
    longer arguing pre-emption; they are arguing reasonable
    and measured response.

    >
    >
    > <brett> If you are implying that my proposed moral
    > system is flawed or inconsistent or unclear then, yes,
    > I am willing to accept that that could in fact be a valid
    > criticism, but I'd ask you to point out where, because,
    > as I've said, trying to find means of increasing cooperation
    > and putting morality and ethics on a more rational footing
    > is a worthwhile task.
    >
    > <me> three points really;
    > a) it doesn't make a difference how you arrive at an API,
    > it is only important to establish one (rational or otherwise).
    >
    > There is no reason that an emotional or intuitive based
    > person cannot maximize their own utility while being
    > consistent from an external point of view of their behaviors.
    >
    > Ergo, consistency is key {even more so than cooperation, because if
    > you really want somebody's cooperation, you will invariably have that
    > already incorporated in ur utility table [by virtue of it being a
    > necessity to fulfill some other greater utility] and ergo you will
    > have something to offer in trade}.

    I think it does matter that it's rational, not emotional or faith or
    belief based, because reasoning facilitates communication and
    understanding between sovereign agents far more
    effectively. Reason has as its tool language. I am not pooh-poohing
    emotion. Emotion is what makes life worth living, but emotions cannot
    be conveyed in the same way as
    reasons. Nor are they as reviewable and reliable in making judgements.
    In science we do well to acknowledge that our emotions can mislead us.
    I think we do well also to acknowledge this in the formulation of
    contracts and laws.

    >
    > b) it should successfully deal with all types of people; that
    > includes people who want to munch on ur spleen, and people who are
    > complete loners[antisocial], and even those that aren't rational
    > [non-functionally insane]

    It does. Those who finally want to munch on my spleen I regard as
    forces of nature, like sharks or lions. I don't regard their desire to
    eat me as immoral, but I definitely regard my desire to avoid being
    eaten as just, and I feel fully free to exterminate such with extreme
    prejudice.

    There are no *complete loners* that I am aware of. Yet the time of the
    complete loner is likely to be in the future, and these guys could be
    very dangerous. For now, dysfunctional loners are fair game for taking
    action against when they pose a threat.

    There are patterns in most forms of insanity. I'd take my
    understanding of the particular person and their alleged ailment into
    account and go from there. The code doesn't prescribe a
    solution to all; it only provides a (better, I hope) framework.
    Individual judgements still need to be made.
     
    > c) the API should be predictable... having a code of laws where
    > no-one can predict whether or not they are in compliance is
    > pointless. It doesn't make a difference whether or not they agree
    > with the fundamental reasonings; they should be able to follow the
    > statements out to their (hopefully logical) consequences or, barring
    > that, have a simple intuitive model to guide them.
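    Paul's predictability requirement can be made concrete: a rule set is
    "predictable" when anyone can trace an input to its outcome in
    advance. A minimal sketch (the rules and attribute names are invented
    examples, not anything proposed in the thread):

```python
# A "predictable API" in the sense above: a small, ordered,
# deterministic rule table. First matching rule wins, so any agent
# can follow the statements out to their consequences beforehand.
# The rules themselves are invented for illustration.

RULES = [
    # (predicate on the other agent, my response)
    (lambda a: a["attacking"], "defend"),
    (lambda a: a["rational"],  "negotiate"),
    (lambda a: True,           "avoid"),  # default case
]

def respond(agent):
    """Return the action of the first rule whose predicate matches."""
    for predicate, action in RULES:
        if predicate(agent):
            return action

print(respond({"attacking": False, "rational": True}))  # -> negotiate
```

    Because the table is ordered and total (the last rule always
    matches), compliance is checkable before acting, which is the
    property the paragraph above asks for.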
    >

    I think my system is more predictable than others. Those
    that present rationally are quickly processed rationally for cooperate
    options and compete threats. Those that don't operate in a rational
    paradigm still fall into patterns. Animals are not rational, inanimate
    objects are not rational; if a person behaves in an irrational way they
    can often make themselves grist for my mill, or the mill of others that
    do act rationally and with an eye for the political.

    > > [brett]To be frank, I am doubtful that the word belief can be
    > > validly coupled (except as crude linguistic shorthand for "this is
    > > my operating hypothesis") with a rational examination of any
    > > situation. Belief is often used by fairly rational people in just
    > > this short hand manner.
    > >
    > > <me> belief => a statement a rational agent holds true.
    >
    > <brett> Many use it like this. I don't like it because I spend a lot
    > of time considering the politics of language and communication.
    >
    > <me> I don't; I use a word sans negative or positive connotations.
    > Descriptive, accurate and precise. The sentence is where I pass
    > judgement on the thought, not the word.

    I accept that that is often true.

    >
    > <brett> I think that if an extrope is debating with a flat earther
    > in front of an open minded audience, and the audience is only partly
    > paying attention, and they hear the extrope talking of beliefs on the
    > one hand and the flat earther talking of beliefs on the other, the
    > audience may be seduced into thinking one belief may be just as good
    > as the other. I think it is in our interests to get some probability
    > and quantifiability into the discussion. Belief is language which
    > serves the preservation of the status quo.
    >
    > <me> I agree; if it is ambiguous how I am using a word, just ask
    > for clarification... vice versa on my end, assumed, of course.
    >
    You miss the political point. The audience is the voting public. It is
    up to extropians to convince them of our relatively non-conservative
    agendas or to have to wear the policies that are put in place.

    It is for us to be smart in our communications or wear the
    consequences of not being. When we use the word belief we weaken our
    cases where they should be strongest - we usually have reasoned, and
    we, unlike our opponents, are (or should be) open to the superior
    argument.

    >
    > > [Brett] By the code of morality I have tried to
    > > describe, belief qua belief is immoral. This is because when one is
    > > believing one is not reasoning, and when one is not reasoning to
    > > the route of one's selfish best interest one is groping with a less
    > > than optimal method.
    > >
    > > <me>
    > > depends on how u define it :)
    > >
    > > [brett] Yes. And I don't think extropes generally define it as
    > > I do, but my point is that people who hear belief being used may be
    > > people we are trying to persuade and it behooves us to use the most
    > > persuasive language.
    >
    > <me> disagree. more important than persuading other
    > people is establishing a way of communicating clearly
    > and unambiguously, preferably with some training
    > into how to accurately convey your thought processes
    > to another, and how to detect (and request clarification) when others
    > are using a word to convey a different semantic construct.

    Often the extropic message is most persuasively put
    when it is put in terms of clear and unambiguous
    communication.

    I have personally seen huge decisons on national policy "justified" by
    political leaders not on the basis of the evidence but on the basis of
    belief.

    > Ergo I never argue persuasively, only defensively :)
    > Lawyers argue persuasively, and just about everybody
    > I know (excluding lawyers) hates dealing with them
    > [ergo a necessary evil]....

    Very much so. Fact is, the legislation that exists in our countries is
    established in a particular way. There is no point wishing it were
    otherwise; it is as it is. Therefore I'd rather be an effective
    lobbyist for the policies and laws I want to see enacted (or not
    enacted) than not be.

    >
    > > Belief is a poor word for conveying significant amounts
    > > of intellectual exercise.
    > >
    > > And certes, just because
    > > you believe something doesn't necessarily make it false
    > > (or ill-advised); classic example, I believe the sun will rise
    > > tomorrow morning... Of course the veracity of that statement will
    > > require observation tomorrow morning; but the belief is both
    > > advisable and, statistically speaking, fairly certain... In other
    > > words, belief and logic are not necessarily at odds; it depends on
    > > how you define it.
    >
    > <brett> My point is that when you speak in a political forum you
    > do your own thought process (which is based on considerably
    > more than mere indoctrination by another, I hope) a disservice
    > when you use the word belief instead of another word.
    >
    > <me> I see; you're making a broader point than our discussion :)

    Yes. Sorry I get evangelical sometimes.

    >
    > <brett> This is because not everyone who hears you use the word
    > belief knows that you will have done more processing.
    > It seems to me that many extropes fail to realise that the audience,
    > the rest of the world, doesn't give away free credibility points
    > for one wacky belief over another.
    >
    > <me> for me, personally, in day-to-day interactions, people always
    > quickly realize that any thought I express has been well-thought out.
    > I have not, of course, attempted to communicate in a traditional
    > public forum {press, tv, etc} other than internet-based forums
    > (list-servs and usenet). Incidentally, mostly because I'm not really
    > trying to persuade anybody of anything outside of my scope of
    > interaction. I'm primarily interested in getting my viewpoints out,
    > to confirm whether or not there are any flaws in my thinking that I
    > have missed, or to flesh out an idea by gathering other people's
    > inputs on the subject, generally through anecdotal experiences... The
    > scientifically-based stuff I get through journals and books.

    Perhaps you are still relatively young and have yet to grow your
    political teeth. This is fair enough. If you come to see the
    connection between the stuff we aspire to and the legislation that
    goes through national parliaments on stem cells and nanotechnology and
    intellectual property, you may see things differently. I do not mean
    to be patronising. Actually I'd like to be empowering. Many young
    extropes could be potent political forces for change in their own
    individual right if only they perceived the need and put some time
    into acquiring the skills.

    > > [brett]My contention is that as soon as one becomes
    > > a "believer" one has ceased to hold to the principle
    > > of to thine own self be true - unless one is incapable
    > > of reasoning - (or one must reach a tentative
    > > conclusion based on the imperative to live and
    > > act in real time).
    > >
    > > <me> hahahaha :) re - ur last qualification :)
    > > well since we're all stuck in this current universe... :)
    >
    > <brett> Yes, but again I'd go back to pan critical rationalism.
    >
    > <brett> Without ever getting absolute certainty there are techniques
    > which we can learn which give us a much higher probability of
    > getting a correct (a useful) answer.
    >
    > <me> until you discover that extreme you didn't consider.. I'm an
    > engineer (mentality-wise), so for the most part, I always have to
    > build/plan for the worst case scenario.. Theoretically, that means I
    > have a smaller margin for error before I'm willing to sweep it under
    > the rug as not worth planning for.

    Sorry, don't follow. Sounds right but don't see the
    relevance.

    >
    > > > Generally speaking, I have no use for morality;
    > > > just ethics [standard api, consistently adhered to, logically
    > > > derived, based on reality]....
    > >
    > > [brett]I'm reading api as 'application programming interface'.
    > >
    > > <me> yuppers.
    > >
    > > [brett]"Generally speaking" I suspect you are unlikely to enjoy
    > > discussing morality and/or ethics much further with me ;-)
    > >
    > > <me> it doesn't really bother me, if thats what u're asking :)
    >
    > <brett> I was asking. I don't enjoy boring people, I just risk it ;-)
    >
    > <me> thats fair :) generally the conversation ends up dying out when
    > nobody bothers responding :)

    I expect that will be after this post, and that's fair enough. It has
    been good to try and write down some stuff and try and get some
    ideas straight. Or straighter.
     
    > > but I've pretty much made up my ethical system, at least
    > > in terms of the larger ruleset (meta-rules)...
    > > some of the smaller "behaviors" are data-driven
    > > (tit-for-tat, etc) :)
    >
    > <brett> As indeed in practice most of us have. If I am right and a
    > better, more universal ethical system can be derived, I would expect
    > that in most people's cases there would be very little observable
    > difference in how they'd behave. But then on the other hand when one
    > starts to routinely reason as opposed to believing, one is in a
    > position to converse mind to mind with other reasoning beings.
    > Beliefs can very easily become entrenched positions.
    > I think to reason when reason is available is more
    > social, and because I think humans are social (their
    > interests are best served by cooperation), to reason is
    > moral and to believe is not.
    >
    > <me> I agree with everything but ur last statement :)
    > as I said, give me a simple, robust API any day.
    > It doesn't matter to me if there is a one-to-one mapping
    > between it and some derived, generic ethical system,
    > or there is a many-to-one mapping. I generally prefer
    > rationally based systems in that the API happens
    > to conform to reality [generally the other requirement,
    > don't want to end up getting killed or maimed for my API]...

    To use your terminology, I'm more concerned with the
    consequences of trying to push forward with a suboptimal
    API.

    > I dunno, overall, I have some fairly big problems with
    > your API.

    I can tell. But thanks for persisting; it's helped me clarify
    my thoughts.

    >
    > I think more than anything else though, it's that social
    > requirement thing... :) Then again, I've been described
    > by several people in my life as a human computer...

    :-) I agree the social bit is the weaker bit.

    Regards,
    Brett

    {PS: No reply expected - frankly it would scare
    me to have to revisit this thread at this length again
    soon - I need to break it out, seriously edit and
    see what happens next}



    This archive was generated by hypermail 2.1.5 : Thu Aug 07 2003 - 08:55:51 MDT