Re: sentient rights (was RE: Battleground God)

From: Samantha Atkins
Date: Fri Feb 22 2002 - 01:30:47 MST

> Anders writes:
>>The principles do not seem to be enough to constrain an ethical system;
>>they do not form a set of ethical axioms or constrain the basis for
>>extropian ethics. They certainly have ethical content, but this content
>>deals more with desirability of different things than the core
>>"mechanics" of an ethical system.
> But doesn't Extropianism give us a handle by which to judge the
> desirability of different world outcomes? And doesn't this, in itself,
> constitute an ethical framework?

Actually, without criteria as to what is and is not an increase
of "extropy" and a better handle on why this is the greatest
good, I don't see how it gives a way to judge the desirability
of different outcomes except in a few areas. Broadening it will,
imo, require a more detailed version of what we wish the future
to look like, and why, than is currently present in the
Extropian community generally. I think a lot of us are "working
on it" in our various ways, formally and informally.

Generally, the ability to judge relative desirability can only
exist within a fairly grounded idea of what the good consists of
and how it applies to various situations. It is not a
free-standing ability. The ability to arrive at a choice between
alternatives is a consequence of an ethical system rather than
its foundation.

> It is true that it does tell us how to get there. Even if we agree that
> extropy is a desirable goal, we may not agree about what are the best
> practical decisions on a day to day basis.

I assume there is a missing "not" in the first sentence.
Extropy is a quality/capacity, not a "goal". Increasing extropy
is a goal, but not necessarily the only important goal or the one
that fully subsumes all others. The 7 Extropian principles are
mostly qualities and trends, and are not sufficient to fully
flesh out a system of ethics.

>>Personally I would say that this is not a flaw. Extropianism rather
>>inherits the ethical underpinnings of its parent philosophies of
>>libertarianism and humanism (a kind of philosophical object
>>inheritance); it is compatible with most versions of them, and does not
>>as expressed in the principles have to redo all the immense work that
>>has been done on expressing ethics and politics elsewhere. It is a bit
>>like how Robert Nozick starts _Anarchy, State, Utopia_ by simply
>>assuming certain rights - the book is not about deriving them, it is
>>about what conclusions can be made *after* they have been derived.

Well, "delegation" might be a clearer analogy than "inheritance"
as we delegate most ethical decisions and general biases to
something other than what today exists as part of Extropianism.

I disagree that immense work has been done on the basis of
ethical systems. A lot of work has been done on the basis of
just assuming a particular set of ethical roots somehow existed
or made sense or were axiomatic. As more and more of the root
context changes we will find that the ethical root presumptions
must be re-examined and re-grounded.

> I am not so comfortable thinking that we can graft conventional
> libertarianism onto Extropianism, or that we can start with libertarian
> ethics as a foundation for our Extropian ethical system. Haven't Max
> and others attempted to distance themselves from a strict libertarianism
> in order to open the movement to a wider range of political philosophies?

Me neither. We would be starting with a shaky foundation (true
of all current ethical systems, not just this one) that more or
less works given current biases and context, and expecting it to
hold when the context shifts rapidly, or even to guide us in
shifting that context. It is very unlikely to be up to the job.


>>One can try combining different ethical theories with extropianism and
>>see what happens. I would say that utilitarianism and extropianism are
>>not a very successful combination; such an extropian utilitarianism
>>would either have to be based on maximizing extropy or have to show that
>>increasing personal extropy and increasing utility are identical. In any
>>case it would tend to run over individuals in the pursuit of
>>maximization, and it seems hard to combine with the self organization
>>principle in the old version of the principles. A rights based form of
>>extropianism seems far more consistent, although we still have to find a
>>derivation of rights that convinces.
> One of the big question marks in the Principles which we did not explore
> much is whether they should be seen as collective or individual. When we
> seek to maximize extropy, as defined by the Principles, are we trying
> to do so each of us individually for ourselves, or for society and the
> world as a whole? Is my goal a world with maximal extropy, or is it a
> world in which I personally have maximized my potentials? I don't see
> the Principles as giving a clear guideline for answering this question.

I don't see any way to fully maximize my potential without
maximizing the potential of the context (the world and other
people) I find myself within. The boundary between this "self"
and others appears to me to be more porous than many are
culturally programmed to believe.

> This is perhaps the most fundamental ethical question we face. It is
> the difference between being generous and being selfish; between being
> trustworthy and being a cheat; between being honest and lying for self
> benefit. If I can benefit myself by harming another, without getting
> caught, should I do so? It arguably maximizes my own potentials for
> extropy, but also arguably reduces the net extropy for the two of us.

Generous and selfish are not fundamental ethical-philosophical
constructs. They are quite sloppy and value-laden terms. They
beg the more fundamental questions of what the "good" is, to
what degree it is contextual, and so on. Most of the questions
in that paragraph rest on free-floating assumptions.

> Although Extropianism is often seen as an individualistic philosophy, I
> think most of us would agree that from the ethical perspective, we care
> about more than our own personal benefit. We want to see a world where
> the potentials promoted by Extropianism are available to as many people
> as possible. I don't know if many of us would go so far as to say that
> we would sacrifice ourselves if it increased the net extropy of the world,
> but we are far from being dedicated only to our own selfish goals.

I don't draw that much of a distinction between what is good for
me and what is good for humanity. I don't believe they need to
conflict generally.

> Of course in many cases these two extremes do not actually lead to
> different strategies for day to day life. Often we can do good by
> doing well. We behave in a trustworthy and honest and unselfish way,
> and in the long run we benefit directly and personally by these actions.
> So to some extent we can get away with ignoring the issue.

This is not "ignoring the issue". It is noting that it is
quite often a non-issue that human philosophers keep stubbing
their ethical toes on.

> But then for each of us there will come times when we are tested and
> tempted. You find someone's wallet with money in it; you are carrying a
> load of trash to the dump and find a secluded spot where you could toss
> it for free; you are offered to join an Internet pyramid scheme which
> will inevitably leave the latecomers with severe losses. Then you have
> to decide whether your ethical system is just about you, or about the
> world as a whole.

Such situations are really not terribly relevant to general
ethical systems. However, each of the proposed dilemmas is no
dilemma at all when the costs and implications are worked out a
few steps further than first inclination. Of course, it is
needful to have a few supergoals broader than immediate seeming
gratification.

- samantha

This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 13:37:40 MST