RE: To thine ownself be true?

From: Paul Grant (shade999@optonline.net)
Date: Tue Aug 05 2003 - 00:54:57 MDT


    From: owner-extropians@extropy.org [mailto:owner-extropians@extropy.org]
    On Behalf Of Brett Paatsch
    Sent: Monday, August 04, 2003 2:51 PM
    To: extropians@extropy.org
    Subject: Re: To thine ownself be true?

    Paul Grant writes:
    > .. in my opinion, anybody who claims they are acting morally
    > in taking a pre-emptive action that harms others, regardless of
    > the effect, is immoral; insofar as no event has occurred to
    > justify said "response".

    <brett> I cannot agree with this statement as a generalisation, as there
    are some classes of pre-emptive action made on the basis of genuinely
    held, earnestly reasoned (note I am not touching *belief* here) views
    that would require action in my view.

    <me> trying to justify a pre-emptive measure on the notion that it is
    "genuinely held" or "earnestly reasoned" is a rationalization in my
    opinion, generally to excuse the type of behavior you are engaging in...
    the limit on this line of reasoning, though, is in the duration of the
    act.... for instance, say you were prescient, and saw a man who was
    going to mug you (with a knife) 10 minutes from now, and hit him over
    the head; then you would be acting morally (given ur prescience). Let's
    say you are not prescient, and you hit him over the head on the
    possibility that he might mug you; then you are acting immorally. Let's
    say you are not prescient, and he is mugging someone else (as it is
    apparent to you from your vantage point), and you intervene by hitting
    him over the head... Then your actions may or may not be immoral, on the
    basis that he may not be the one doing the mugging, but rather, may be
    the muggee. The point being that you have to consider the granularity of
    the event, the knowledge you had as an autonomous agent, the environment
    you're in, the action chosen, and the outcome of that action...
    All those are invariably what people choose to judge you over [from a
    historical perspective].... and since I am not particularly concerned
    about morality within a person's own framework, but rather a cooperative
    sort of morality that effectively incorporates even the most base of
    people's intuitions, that is the scope my statement is meant to be
    evaluated in. If you can find a *single* reasonable person (reasonable
    defined as a person whose views of reality are generally accurate)
    within your collective who judges your actions immoral on the basis of
    your pre-emptive action, then you have violated the precepts of a good
    moral system. Ergo don't go around bonking people on the head until you
    are sure of the situation you are in, and then react only as needed.
    Incidentally, this is the equivalent of a code-of-laws system.
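
    <me> To make that "single reasonable dissenter" test concrete, here is a
    minimal sketch in Python. The names (Person, judges_immoral, the example
    actions) are purely illustrative assumptions of mine, not anything
    either of us has actually specified:

        from dataclasses import dataclass
        from typing import Callable, List

        # Hypothetical stand-in for a member of the collective.
        @dataclass
        class Person:
            views_generally_accurate: bool         # "reasonable" = views of reality generally accurate
            judges_immoral: Callable[[str], bool]  # that person's own verdict on an action

        def action_is_acceptable(action: str, collective: List[Person]) -> bool:
            """An action passes only if no reasonable member of the collective
            judges it immoral -- a single reasonable dissenter sinks it."""
            return not any(p.views_generally_accurate and p.judges_immoral(action)
                           for p in collective)

        # Example: a pre-emptive bonk fails if even one reasonable observer
        # calls it immoral; a bonk delivered mid-mugging passes.
        observers = [Person(True, lambda a: a == "pre-emptive bonk"),
                     Person(True, lambda a: False)]
        print(action_is_acceptable("pre-emptive bonk", observers))   # False
        print(action_is_acceptable("bonk mid-mugging", observers))   # True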

    <brett> Indeed sometimes there would be
    a moral impetus towards action (even pre-emptive
    action). We cannot *know* for certain the actions of
    others and we cannot always prudently wait for them
    to do their worst before we respond. In the end it is
    to our own judgement of the total risks that we must
    hold true and then on that judgement we must act
    (including in *some* cases pre-emptively).

    <me> I'll agree that you can *feel* your actions are morally motivated.
    I would say, however, that if you were to examine your actions
    rationally, you will always find a probability or pathway (excluding a
    continuous action like a knife stabbing towards you, ergo a certain
    level of granularity) that will lead to your "pre-emptive action" being
    immoral. Of course you could always say (arbitrarily) that I was
    reacting to the best of my abilities to the best of my knowledge, ergo
    my action was moral by my system of morals/ethics.... But I tend to
    think of that as a cop-out.

    > In relation to your secondary point (stated in this letter); I really
    > don't think morality has anything necessarily to do with
    > self-delusion, or the acknowledgement thereof. Or rather, there is no
    > truth that states necessarily you have to be honest, ergo an act of
    > dishonesty (as it relates to self-delusion) does not violate any
    > particularly great truth.

    <brett> First, the status of morality and the rationality of ethics is,
    at least so far as I am aware, pretty widely regarded in philosophical
    circles as being almost a matter of opinion (e.g. Bertrand Russell,
    History of Western Philosophy).

    <me> I'm sure it is; until you run into an event that requires faith or
    belief outside of rationality. Ergo if I'm marooned on a desert island
    for 60 years, does it really make a damned difference if I hallucinate
    Marilyn Monroe on the island with me in order to remain sane? I tend
    towards a functional/dysfunctional definition of sanity; if it's
    dysfunctional, then you are not being ethical; if it's functional, you
    are being ethical. And since functionality is related to the environment
    you are operating in, ergo my comment about self-delusion not really
    having anything to do with morality... I definitely think everyone is
    engaged in it to some degree (self-delusion), and to the extent that it
    helps you, it's ethical....

    <brett> I find this conclusion powerful, dangerous and deeply
    unsatisfying, so I am keen to have at it. Let's be clear: I may fail,
    but if I'm going down like Don Quixote, then I'm going down "having at"
    one of the larger, uglier philosophical windmills.

    <me> I was just telling my little sister yesterday about one of the
    classical issues in my life (at one point during my younger years): at
    what point does being too intelligent start to harm you (the classic
    form being, if you could trade intelligence for guaranteed happiness,
    would you do it)... most intelligent people say no; I think the really
    intelligent people, though, when they consider it, say yes. This of
    course assumes that people intuitively seek to maximize their utilities,
    and said maximization of utility defines a state of happiness [which is,
    I think, reasonable]... Ergo self-delusion is only dangerous if you
    can't discard the delusion when it becomes detrimental to your pursuit
    of happiness. This is the hallmark of a creative intellect, insofar as
    it requires both the vision to see the possibilities, and the ability to
    maintain separate versions independent of each other {and process them
    for fitness subconsciously}.

    <brett> Ok. Now here's my point. Unless a moral code can arise from a
    set of universals, such as a propensity to reason and a predisposition
    to sociability, then there is not likely to be much genuine agreement
    between subjective individuals on moral codes.

    <me> Oh I'll *agree* that it is absolutely necessary to have something
    to that effect; I think the inclusion of a predisposition to sociability
    is where your moral system will fail, since a fair number of people are
    not (at some point in their lives) sociable.... Any moral system you
    build on that premise is doomed to fail because it does not take into
    account actions by that subpopulation of people (antisocial individuals
    who are operating on a different ethical system). I would state that ur
    assumption that there is a propensity to reason is a reasonable one, in
    that it is necessary for the ability to recognize other autonomous
    agents' actions for what they are: expressions of their own
    moral/ethical systems...

    <brett> Further, those who do not endeavour to understand themselves,
    what manner of creature they are, are not going to be in a position to
    know what the most optimal compromises for them are when compromises
    need to be made.

    <me> I've met several people who are extremely intuitive, but unable to
    verbalize (form coherent sentences expressing) their viewpoints... they
    just know what is right for them, and what is not... how does your
    system encompass them?

    <brett> A person that deludes themselves that they are a different sort
    of creature, with different sets of drivers and needs than they actually
    have, is precluded from sitting down at the table to negotiate for their
    own best interests, because they do not know their own best interests.
    A person that deludes themselves willingly can hardly be a person that
    others would want to engage in a moral compact with.

    <me> according to you :) to me it's fine :) In fact, I rather like space
    cadets :) They tend to have a rather fascinating imagination, and are
    very suggestible...

    <me> I should give you a friendly warning at this point; I am generally
    where people's moral systems (day-2-day) break; they invariably adopt
    (when dealing with me) mine, as it is impossible to force me to adopt
    theirs, and generally speaking, I am a fairly productive person (and
    very difficult to avoid if your moral system ends up conflicting with
    mine). I say this, of course, in a friendly manner, because it will give
    you an insight into my fundamental objection to your system... namely,
    that it is not sufficiently broad to encompass all useful people
    (non-functional insanity excluded).

    <brett> Within the context of the time available I think
    individuals ought to use pan critical rationalism in
    their approach to understanding their own nature and
    then pursuing the optimal outcome for them in accordance
    with their nature.

    <me>....which may end up not being rational (to outside
    examination)....

    <brett> This optimal outcome will necessarily involve compromise,
    because others whose cooperation is needed will not, if they are
    enlightened and empowered (and likely to be good allies as a
    consequence), strike bad bargains.

    <me> I would agree with this sentence :) compromise
    is often called for amongst people if they hope to culture
    friendships that will be substantial in trade.
     
    > <me>
    > Or to put it more bluntly, sometimes self-delusion
    > is the ticket :) Ever wonder why (evolutionary-speaking)
    > we have emotions?
    >

    <brett> I reckon to the extent one is self deluded one will be
    screwed at the negotiating table because perceptive others
    will recognize you as having a lower sentience quotient and
    the only one to blame for this if you are self deluding will
    be yourself.

    <me> depends on how intelligent they are. some forms of
    self-delusion are bad, some are good. some are bad generally,
    but useful in achieving short-term goals...

    <me> fundamentally though, I think your exclusion of people who are
    self-deluded would end up eliminating all of humanity... and ergo ur
    proposed moral system would fail in that it would not encompass
    anyone... and if there was a completely rational person (at some point),
    then they would probably end up suicidal.. Of course this is conjecture
    on my part (assuming everyone is self-delusional to a degree) but I
    think it is borne out by the nature of being human.
     
    > [brett]
    > Now against this point it might be argued that there
    > are no circumstances where dishonesty with oneself is
    > a moral matter. I concede that this is the traditional view but my
    > contention is that that traditional view is wrong, flawed, and lacking
    > in utility.
    >
    > <me>
    > Fair enough, I'll bite :)
    >
    > [brett]
    > I am arguing that only those that can commit themselves
    > to hold themselves to a rational moral code are in a
    > position to have the sort of maturity that is required to
    > forge the sort of compacts that will best serve the
    > strongest forms of cooperatives and the most extropic
    > societies.
    >
    > <me> substitute "ethics" in for "morality" and I'd agree;

    <brett> So far I've been using morality and ethics without
    distinguishing them particularly.

    <me> yeah I noticed :) I was trying to distinguish the two for the
    purposes of this dialogue :)

    > morality to me is something generally provided for people
    > by external sources (rather than derived by said people);
    > it also deals heavily with intention. Now, I *really* don't care what
    > people's intentions are, just their actions.

    <brett> Of course you do. A person that you know was intending
    to steal from you yesterday but did not for lack of an
    opportunity is likely to be filed as such and regarded as
    such. If not you'd be suckered far more often than the
    norm.

    <me> "an honest man is just a thief without opportunity" :)

    > Intentions are more of a heuristic to decide whether or
    > not someone's future actions will be favorable.

    <brett> They certainly are that. And a person's future actions, their
    motives, and their reputation are surely part of practical moral
    deliberations on your part.

    <me> nope. I deal on a tit-for-tat basis. Either people adhere to a
    basic set of standards, or they disappear from any sort of significant
    interactions (if I itch, I scratch). There's generally a ramp-up period
    of re-training. They are free to interact with others however they wish.
    I do not interfere unless called upon by another (involved) party. I am
    *always* fair.
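
    <me> A minimal sketch of what I mean by tit-for-tat here (the standard
    iterated-game strategy: cooperate first, then mirror the other party's
    last move). The function name and the toy history are just my own
    illustrative assumptions:

        def tit_for_tat(their_moves):
            """Classic tit-for-tat: cooperate on the first interaction, then
            simply mirror whatever the other party did last time
            (if I itch, I scratch)."""
            if not their_moves:            # no history yet -> extend good faith
                return "cooperate"
            return their_moves[-1]         # echo their previous move, good or bad

        # Example: one defection earns one defection in return, then things
        # go back to normal -- the "ramp-up period of re-training" above.
        history = []
        for their_move in ["cooperate", "defect", "cooperate", "cooperate"]:
            print(tit_for_tat(history))
            history.append(their_move)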

    > This discussion could all
    > be simplified by people adopting a non-intention based
    > system, where people are judged by their actions, and
    > statements made by said people are evaluated in that
    > context (as actions).

    <brett> I am trying to establish the validity of the statement "to thine
    ownself be true". You, I think, are trying to make me make the case, or
    to refute it. I am not interested in a simple moral system; I am
    interested in a rational, teachable and extensible one.

    <me> It's rational, teachable and extensible. In fact, it's more
    teachable than a complicated one.... There are no gray areas.

    <brett> If I can teach a moral system that has high utility to those I
    teach it to, like teaching the principle of tit for tat, then I can be
    confident that the world will be that much more pleasant and predictable
    for me as a result.

    <me> evaluate my suggestion within the context of whether or not such a
    system would fit ur standards and offer high utility... I would suggest,
    humbly, that it does...
     
    > [brett]
    > I do not imagine that any of us ultimately succeeds in avoiding self
    > delusion. But if the charge of hyper-rationality is ever a valid
    > criticism I do not think it can be so on matters of morality where the
    > individuals concerned acknowledge that their take on the universe is
    > inherently subjective and inherently selfish.
    >
    > <me>
    > I think there are degrees of self-delusion; I think
    > more important than self-delusion is the end effect that
    > self-delusion has on the person as a total system.

    <brett> In almost all negotiations, and most moral matters between
    persons involving a quid pro quo, the best possible outcome for the
    individuals involved depends on them recognizing the best possible
    outcome for themselves (without being deluded as to what they want or
    need) and then most closely approximating it.

    <me> I think perhaps we mean two things by "being deluded"... You are
    referring to a final state of utility reached by a rational deduction of
    what your particular utility table is... I am referring to it as a
    person who has already determined their utility table, but reached it
    through a non-rational pathway. Ergo for you, someone who is
    self-deluding is incapable of understanding what their true utility
    tables are... whereas for me, a person who is self-deluding is a person
    who is mapping their stimuli (internal & external) to alter an
    unpleasant situation into a state of utility... where said mapping may
    not be based either on a rational inference chain (thinking), or even on
    rational examination of oneself... What do you think? Can you clarify
    exactly what you mean by self-delusion?
     
    > [brett]
    > It is my contention that if we cannot find a harmony of
    > selfish interests we will not find anything but the illusion
    > of harmony at all.
    >
    > <me> in other words, someplace where everyone's needs
    > are met...

    <brett> Absolutely not. That is pure fantasy land. The best we can hope
    to achieve is a reasonable compromise where all sit down in good faith
    and, recognizing that all are compromising but all are gaining in the
    aggregate, all act in accordance with the agreement.

    <me> in other words, someplace where everyone's needs are met... to the
    best of the collective's ability?
     
    > [brett]
    > And in order for there to be a harmony of selfish interests
    > there must be real recognition of the nature of oneself
    > and ones needs.
    >
    > <me>
    > or a mapping of sensory input to available physical stimulus;
    > that's another possibility, sans recognition of one's own selfish
    > interests.

    <brett> I wouldn't limit one's self-understanding, and one's
    understanding of one's needs and desires, to the merely physical.

    <me> I certainly wouldn't extend it to the metaphysical :) Brain
    processing is a physical process. It's influenced directly by physical
    processes... including meta-thoughts (working on ur concepts directly
    with a higher-level grammar).

    <brett> I think there are social needs in most people that are very
    deep-seated and go beyond the need for physical contact.

    <me> I wasn't referring to that :) I was referring to altering the
    reality you're living in (cognitively) by biofeedback, but on a
    subconscious level. Something akin to the Siberian train conductors
    lowering their metabolism based on when the train was going to pull into
    the station [arrive]... but more sophisticated...

    > Same goes for empaths (people with highly developed
    > abilities to sense what others are feeling off of physical
    > [body, speech] cues). They intuitively understand people
    > and respond sans a specific rational understanding of those
    > people. There's no reason (any empaths out there?) to
    > think that emotion-intuition is not being applied to
    > themselves (the equivalent of reflection).

    <brett> I am not sure what your point is here. Sociopaths are also
    good readers of patterns. They just don't empathise. But
    I would argue that sociopathy is dysfunction.

    <me> Depends on whether or not sociopaths can manage to contain their
    more destructive impulses (or rather, get away with it). There's a
    subpopulation of sociopaths who are quite successful, who manage to
    sustain their needs by twisting the letter of the law to suit their
    needs. It's not necessarily a dysfunction. Plus, on top of it, it
    depends on whether or not a sociopath is bothered by prison. Who knows?

    <brett> And real sociopaths would possibly grin at me and say who needs
    that sentimental bullshit anyway. And I'd say, you do, here's why... etc.

    <me> I think I can say very fairly that sociopaths do not
    think like normal people; especially (or rather, primarily)
    when it comes to their utility tables... There's no basis
    for comparison :) Ergo why I brought them into the
    conversation...
     
    > [Brett]
    > This is where I think it becomes important
    > to acknowledge to oneself that one can be rational and
    > that one is by nature social. If one does not acknowledge
    > that one is social one is not (by my reckoning) being true
    > to oneself and one does not have the sort of maturity
    > that will enable one to be on good terms with oneself
    > and to form real compacts that have a chance of being
    > honored with others.
    >
    > <me>
    > Ooooh I don't know about that :) You seem to take
    > that people are by nature, social creatures. I don't necessarily
    > think thats the case. Or to qualify, people are social by a matter of
    > degree.

    <brett> Sure but there is a baseline.

    <me> go tell that to the Unabomber.

    > Some are quite capable
    > of going it alone while others would die if separated
    > from the herd.

    <brett> Only after some initial basic social assistance has been
    rendered.

    <me> We're dealing with [dynamic] adults here, no? You don't intend
    to limit your morality to only those who are currently social?

    <brett> Many infants get suboptimal social assistance and the
    outcomes are often dysfunctional people. But they are not
    dysfunctional by choice.

    <me> but they're still dysfunctional, at least according to what
    currently passes for a majority of society.

    > So I question ur assumption that everyone
    > is social.... It's obviously a core belief in ur system, and certes,
    > generally speaking, it is the case that most people are social.

    <brett> Belief has nothing to do with it. I have learned about and
    observed human infants; I know that physiologically they cannot survive
    without assistance - that they wish to survive - that they suckle if
    they can and cry if they can't is not a matter of mere belief.

    <me> I'll agree with ur assessment on babies... now what's the relevance
    to your moral system?

    > But not all.

    <brett> Not all to the same degree. But there is no person alive at
    present (to the best of my knowledge) with the power to stay alive
    without cooperating with others.

    <me> but you acknowledge that it is a possibility?

    <brett> It is not necessary that social be about niceness; it is better,
    more functional, if it is about an enlightened understanding of frailty
    and the benefits of cooperation. I would argue that tyrants that aim for
    the short glorious life of Achilles in 2003 are short-changing
    themselves. They are sub-optimally selfish. With a tweak of their value
    systems they may be able to satisfy more of their needs and desires by
    cooperating. But many of them would have to re-learn, and I'd expect few
    of them to change what has worked for them if they could not be
    presented with a compelling argument. If there is no compelling argument
    that can be made to their self-interest then I would say that no real
    moral argument is being put to them at all.

    <me> ....except to say that they have presumably successfully satisfied
    their own utility tables....

    > [brett]
    > If there was a creature that by nature was not social in
    > any sense I would grant by my notion of morality that
    > that creature would have no duties to others and that
    > that creature would not be acting immorally in anything
    > it did to others. If one is sure that one is being
    > threatened by a genuine sociopath by my moral reckoning
    > one would not only be permitted to act in ones defence
    > one would be morally obliged.
    >
    > <me>
    > see now I wouldn't go that far; just because ur being
    > threatened by a sociopath does not necessarily mean they
    > will carry out that act; there's a whole subset of sociopaths
    > that lead "normal" lives without going through the murder
    > sprees that characterize their (by our definitions)
    > less-successful brethren. I think that's more of a policy issue
    > (to be decided upon by each individual)....

    <brett> That is exactly right. In the end the individual must decide
    moral policy for themselves. The intelligent individual will take into
    account existing social mores and laws, but in the end they will not
    shirk the responsibility of the moral decision. They cannot. To shirk
    is to allow defaults to go into play.

    <me> yes but ur system implies a judging of that moral code
    (at the end of the day) by other members of that society...
    so individual morality is irrelevant if the rest of the group
    does not consent to that action as being moral...

    <me> my overall point is that saying your action (by your own moral
    standard) is moral is trivially easy to do; convincing others is far
    more difficult.
     
    > [brett]
    >
    > In practise I would have some residual doubts about
    > the completeness of the sociopathy of even a creature
    > such as Hitler so I would not feel completely free to
    > exterminate him with extreme prejudice unless I had
    > made a good faith reckoning as to the nature of him
    > as a threat to what I value. Then, having made as rational a
    > determination of the nature of the threat as I could given the time
    > and context, I would feel free to exterminate him with extreme
    > prejudice, and I would expect to feel no guilt but only some
    > misgivings that had I more time I might have judged better. i.e. my
    > concept of morality is, I think, in that sense practical. And it is
    > extensible. If others share it, if they act rationally and in
    > accordance with their selfish best interests as they perceive it,
    > I can (in the context of this moral system) not fault them morally.
    >
    > <me>
    > now don't u see a contradiction therein? What if
    > the sociopath, or even loony person (to broaden the set),
    > is merely acting to fulfill his own utility (ergo munching
    > on ur spleen or the like)? I mean, just because someone
    > else is "selfishly" (is there any other way?!) pursuing
    > their own interests, doesn't necessarily mean ur own
    > moral code should approve their own...

    <brett> No, my moral code would tell me that if this person is
    reasonable I can point out that their aspiration to munch on my spleen
    is well recognized by me and that that is not a circumstance that I can
    permit to prevail. Either we reason out a conclusion together or we
    fight to the death now. I then invite them to do their calculations of
    cooperation vs competition and consider how the agreement, if it is to
    be cooperation, will be effectively honored. If at any stage I feel that
    my appeals to their reasoning are hopeless then I fall back on trying to
    kill them before they kill me.

    <me> what, you offer them half ur spleen? your moral code has just
    failed to offer a suitable compromise to another rational autonomous
    agent... whereas his morality disregards yours, yours fails to achieve
    its basic premise... that of being attractive to other rational
    beings...
     
    > [Paul]
    > > Pretty much the only time u can consider something
    > > moral or immoral is after the event has occurred, and
    > > then, only for urself. Morality has absolutely no import in a
    > > pre-emptive doctrine.
    >
    > [brett]I don't agree. By my reckoning of morality, when
    > individuals agree to cooperate with each other for their
    > mutual advantage (perhaps at some cost to them on
    > other dimensions were they reckoning their best
    > interests separately) there is a moral bond between
    > them.
    >
    > <me> according to ur definition of morality :)

    <brett> Yes. According to a system I'm offering up
    for consideration because I think there is some consistency
    and utility in it and because if I am right and it is teachable
    I will benefit by shifting the cooperate compete decision
    more towards cooperation (just as if I had taught the principle
    of tit for tat).

    <me> .. but is it the optimal solution?
     
    > [Paul]
    > > Anyone that believes to the contrary has not
    > > rationally examined the situation.

    <brett> Depends what you mean by belief. Belief is a problem word
    for me because a lot of people who are doing more than
    mere believing use the word belief to indicate a sort of
    less than certain knowledge. The problem is that some people
    that use the word belief may have done no personal processing
    on the issue at hand at all but may have simply adopted
    wholesale something that they were indoctrinated with.

    <me> the latter part of that paragraph is what I'm implying;
    that any rational person who has (without resort to emotion,
    or any rationalization or justification) examined the
    concept of acting pre-emptively (sans sufficient proof or
    knowledge of the current environment) is reasoning falsely.
     

    <brett> If you are implying that my proposed moral system is flawed
    or inconsistent or unclear then, yes, I am willing to accept that that
    could be in fact a valid criticism but I'd ask you to point out where
    because as I've said trying to find means of increasing cooperation
    and putting morality and ethics on a more rational footing is a
    worthwhile task.

    <me> three points really;
    a) it doesn't make a difference how you arrive at an API, it is only
    important to establish one (rational or otherwise). There is no reason
    that an emotionally or intuitively based person cannot maximize their
    own utility while being consistent, from an external point of view, in
    their behaviors. Ergo, consistency is key {even more so than
    cooperation, because if you really want somebody's cooperation, you will
    invariably have that already incorporated in ur utility table [by virtue
    of it being a necessity to fulfill some other greater utility] and ergo
    you will have something to offer in trade}.

    b) it should successfully deal with all types of people; that includes
    people who want to munch on ur spleen, people who are complete loners
    [antisocial], and even those that aren't rational [non-functionally
    insane].

    c) the API should be predictable... having a code of laws where no one
    can predict whether or not they are in compliance is pointless. It
    doesn't make a difference whether or not they agree with the fundamental
    reasonings; they should be able to follow the statements out to their
    (hopefully logical) consequences or, barring that, have a simple
    intuitive model to guide them.
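
    <me> If it helps, here is roughly what I mean by an ethical "API",
    sketched in Python. The class and method names are purely my own
    illustrative assumptions; the point is only the shape: one consistent,
    predictable interface that any agent (spleen-munchers and loners
    included) can check themselves against before acting:

        from abc import ABC, abstractmethod

        class EthicsAPI(ABC):
            """Sketch of the three requirements above: (a) consistency no
            matter how the rules were arrived at, (b) coverage of every kind
            of agent, and (c) predictability -- anyone can check their own
            compliance in advance."""

            @abstractmethod
            def permitted(self, actor, action, context) -> bool:
                """Same inputs must always yield the same verdict
                (consistency), for any actor whatsoever (coverage)."""

            def in_compliance(self, actor, planned_actions, context) -> bool:
                """Predictability: an agent can work out, before acting,
                whether a planned course of action complies."""
                return all(self.permitted(actor, a, context)
                           for a in planned_actions)

        # A deliberately trivial concrete instance: a fixed blacklist.
        class BlacklistEthics(EthicsAPI):
            def __init__(self, forbidden):
                self.forbidden = set(forbidden)

            def permitted(self, actor, action, context) -> bool:
                return action not in self.forbidden

        rules = BlacklistEthics({"munch spleen", "pre-emptive bonk"})
        print(rules.in_compliance("anyone", ["trade", "scratch an itch"], None))  # True
        print(rules.in_compliance("anyone", ["munch spleen"], None))              # False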

    > [brett]To be frank, I am doubtful that the word belief can
    > be validly coupled (except as crude linguistic
    > shorthand for "this is my operating hypothesis") with
    > a rational examination of any situation. Belief is often
    > used by fairly rational people in just this short hand
    > manner.
    >
    > <me> belief => a statement a rational agent holds true.

    <brett> Many use it like this. I don't like it because I spend a lot
    of time considering the politics of language and communication.

    <me> I don't; I use a word sans negative or positive connotations.
    Descriptive, accurate and precise. The sentence is where I pass
    judgement on the thought, not the word.

    <brett> I think that if an extrope is debating with a flat earther in
     front of an open minded audience and the audience is only partly
    paying attention and they hear the extrope talking of beliefs on the
    one hand and the flat earther talking of beliefs on the other the
    audience may be seduced into thinking one belief may be just as
    good as the other. I think it is in our interests to get some
    probability
    and quantifiability into the discussion. Belief is language which
    serves the preservation of the status quo.

    <me> I agree; if it's ambiguous how I am using a word, just ask for
    clarification... vice versa on my end, assumed, of course.

     
    > [Brett] By the code of morality I have tried to
    > describe, belief qua belief is immoral. This is because
    > when one is believing one is not reasoning and when
    > one is not reasoning to the route of one's selfish best interest one is
    > groping with a less than optimal method.
    >
    > <me>
    > depends on how u define it :)
    >
    > Yes. And I don't think extropes generally define it as
    > I do, but my point is that people who hear belief being
    > used may be people we are trying to persuade and it
    > behooves us to use the most persuasive language.

    <me> disagree. more important than persuading other people is
    establishing a way of communicating clearly and unambiguously,
    preferably with some training in how to accurately convey your thought
    processes to another, and how to detect (and request clarification)
    when others are using a word to convey a different semantic construct.
    Ergo I never argue persuasively, only defensively :) Lawyers argue
    persuasively, and just about everybody I know (excluding lawyers) hates
    dealing with them [ergo a necessary evil]....

    > Belief is a poor word for conveying significant amounts
    > of intellectual exercise.
    >
    > And certes, just because
    > you believe something doesn't necessarily make it false
    > (or ill-advised); classic example, I believe the sun will rise
    > tomorrow morning... Of course the veracity of that statement will
    > require observation tomorrow morning; but the belief is both advisable
    > and statistically speaking, fairly certain... In other words, belief
    > and logic are not necessarily at odds; it depends on how you define
    > it.

    <brett> My point is that when you speak in a political forum you do your
    own thought process, which is based on considerably more than mere
    indoctrination by another (I hope), a disservice when you use the word
    belief instead of another word.

    <me> I see; you're making a broader point than our discussion :)

    <brett> This is because not everyone who hears you use the word belief
    knows that you will have done more processing. It seems to me that many
    extropes fail to realise that the audience, the rest of the world,
    doesn't give away free credibility points for one wacky belief over
    another.

    <me> for me, personally, in day-to-day interactions, people always
    quickly realize that any thought I express has been well-thought out.
    I have not, of course, attempted to communicate in a traditional public
    forum {press, tv, etc} other than internet-based forums (list-servs and
    usenet). Incidentally, mostly because I'm not really trying to persuade
    anybody of anything outside of my scope of interaction. I'm primarily
    interested in getting my viewpoints out, to confirm whether or not there
    are any flaws in my thinking that I have missed, or to flesh out an idea
    by gathering other people's inputs on the subject, generally through
    anecdotal experiences... The scientifically-based stuff I get through
    journals and books.
     
    > [brett]My contention is that as soon as one becomes
    > a "believer" one has ceased to hold to the principle
    > of to thine own self be true - unless one is incapable
    > of reasoning - (or one must reach a tentative
    > conclusion based on the imperative to live and
    > act in real time).
    >
    > <me> hahahaha :) re - ur last qualification :)
    > well since we're all stuck in this current universe... :)

    <brett> Yes, but again I'd go back to pan critical rationalism.

    <brett> Without ever getting absolute certainty there are techniques
    which we can learn which give us a much higher probability of
    getting a correct (a useful) answer.

    <me> until you discover that extreme you didn't consider..
    I'm an engineer (mentality-wise), so for the most part, I
    always have to build/plan for the worst case scenario..
    Theoretically, that means I have a smaller margin for error
    before I'm willing to sweep it under the rug as not worth
    planning for.
     
    > > Generally speaking, I have no use for morality;
    > > just ethics [standard api, consistently adhered to,
    > > logically derived, based on reality]....
    >
    > [brett]I'm reading api as 'application programming interface'.
    >
    > <me> yuppers.
    >
    > [brett]"Generally speaking" I suspect you are unlikely to
    > enjoy discussing morality and/or ethics much further
    > with me ;-)
    >
    > <me> it doesn't really bother me, if thats what u're asking :)

    <brett> I was asking. I don't enjoy boring people, I just risk it ;-)

    <me> thats fair :) generally the conversation ends up dying out when
    nobody bothers responding :)

    > but I've pretty much made up my ethical system, at least
    > in terms of the larger ruleset (meta-rules)...
    > some of the smaller "behaviors" are data-driven
    > (tit-for-tat, etc) :)

    <brett> As indeed in practice most of us have. If I am right and a
    better, more universal ethical system can be derived, I would expect
    that in most people's cases there would be very little observable
    difference in how they'd behave. But then on the other hand, when one
    starts to routinely reason as opposed to believing, one is in a position
    to converse mind to mind with other reasoning beings. Beliefs can very
    easily become entrenched positions. I think to reason when reason is
    available is more social, and because I think humans are social (their
    interests are best served by cooperation), to reason is moral and to
    believe is not.

    <me> I agree with everything but ur last statement :) as I said, give me
    a simple, robust API any day. It doesn't matter to me if there is a
    one-to-one mapping between it and some derived, generic ethical system,
    or a many-to-one mapping. I generally prefer rationally based systems in
    that the API happens to conform to reality [generally the other
    requirement, don't want to end up getting killed or maimed for my
    API]...

    I dunno, overall, I have some fairly big problems with your API.

    I think more than anything else though, it's that social requirement
    thing... :) Then again, I've been described by several people in my life
    as a human computer...

    omard-out

    Regards,
    Brett
     


