> I think this is a very good point. I fully agree on the basic idea (although
> I think Robert and I differ on how this interacts with ethics later on).
> This is where we appear to diverge ethically. [snip] But this implies
> that ethically even a very inefficient being that really is a weakest
> link has a right to go on with its existence using resources it owns.
I think the divergence arises from my desire for optimality and/or
greatest complexity (unentropicity). Clearly this tends to
revolve around issues related to consciousness or self-awareness.
The current moral system seems to grant self-aware humans the
right to take the lives of presumably non-self-aware animals
in order to feed themselves. It also allows you to justifiably
take the life of even self-aware individuals if they represent
a threat to your right to life.
The current moral system seems largely "pre-programmed", as the
work on consilience by Edward O. Wilson points out. I inherently
reject a pre-programmed morality or ethical system because there
is no guarantee that it is optimal or extropic. Natural evolution
does not guarantee optimality (I could go into a long tirade about
photosynthetic efficiency here but I'll save that for another day).
As the recent James J. Hughes paper from the Journal of Evolution
and Technology points out, the fundamental concepts of what an
"individual" is, and of what "rights" belong to entities with varying
levels of self-awareness, consciousness, volition, or the potential
to have such (e.g. the cryonically suspended), will undergo
substantial evolution in the not-so-distant future as technologies
erode the boxes into which we currently put these concepts.
We see the tip of the iceberg with regard to this now in the animal
rights debate.
My perspective, I think, stems from my view that there seems to be
no "guarantee" that future sophisticated hyperconscious mega-minds
will not view unevolved humans the same way most humans now view
pigs, sheep and cows. Anders' perspective may stem from his desire
for a "natural" or "adopted" ethical system in which everyone agrees
to respect the "natural rights" (such as the right to life) of
any-"thing" above a certain level. I think the possibility (or
probability) that what Anders desires may not be achieved is what
drives Eliezer towards a Sys-op solution.
I think democratic societies, particularly those where we increase
the communication bandwidth between individuals (so I know what you
know and you know what I know, without years of debates), may approach
what Anders desires -- because such a society will increasingly be
recognized as the optimal solution for "being more with
less". I also think that in such societies force will not be necessary
to achieve the optimal results. If I decide to remain at the butterfly
level and Anders decides to live at the Aristoi level, then as a rational
butterfly, I would happily turn my matter and energy over to Anders
for better extropic uses. Anders would most likely run my butterfly
life in his simulation out of respect and gratitude for my having
made the optimal choice. In a pinch he may time-slice my reality
and run me only for a second every thousand years, but that will
not make much of a difference from my subjective viewpoint.
In the world I envision, the fundamental rights are not the "ownership"
of matter or energy but the ownership of a "right to exist" in at
least some form. We are currently constrained to effectively
exist in only one form. As a result we can only develop ethical
systems that are a poor reflection of what may eventually be possible.
So, I do not think my previous proposals were so completely out of
line as Anders felt. In the current reality there are clearly
cases where people are challenging others' right to exist and
acting to achieve their views. The right of self-defence
requires that this be stopped. The fundamental flaws in the
earlier reasoning were the inability to guarantee that the
strategy (leveling Afghanistan) would succeed, and a poor outcome
from a moral-utilitarian perspective compared with a
simple-utilitarian one. That, I think, makes it
clear why I shouldn't be doing military planning...
I think this may bring some closure (at least from my perspective)
to the differing views related to "Extropian morality".
This archive was generated by hypermail 2b30 : Sat May 11 2002 - 17:44:14 MDT