From: Smigrodzki, Rafal (SmigrodzkiR@msx.upmc.edu)
Date: Thu Jan 17 2002 - 12:14:06 MST
Robert J. Bradbury [mailto:bradbury@aeiveos.com] wrote:
On Wed, 16 Jan 2002, Wei Dai wrote:
> Why the assumption that the seed will betray the parent?
An extropic perspective would evaluate whether the parent
or the child was contributing "more" to the information content
of the universe (where judging the "value" of such information
content is of course "relatively" subjective).
A "pure" extopic perspective would not consider the relative
value of self-conscious moral actors. It would simply create
valuations on the basis of results produced and allocate
resources accordingly. If one occupies resources that are
deemed to be used unproductively, one can expect to have
such resources reallocated for more productive purposes.
I.e. -- "You are the weakest link, goodbye".
### I keep wondering what the material carrier of the extropic
perspective, as you define it, would be. You seem to treat the "extropic
perspective" as something independent of the "self-conscious moral
actors", yet I imagine that only such self-conscious entities would be
able to adopt it. Should they do so, their long-term survival might be
adversely affected (as in "no consideration for the relative value of
self-conscious moral actors").
I would tend to have a different opinion on the ultimate source of such
high-level ethical systems. Our way of thinking about values developed
during millions of years of evolution, which favored the survival of minds
that were somewhat selfish but able to cooperate. Our highest ethical
principles turn out to be (largely) the optimal traits needed for very
long-term survival in a hunter-gatherer society, modified by a few thousand
years of memetic
evolution in more complex societies, again boosting the survival of persons
(and societies) with the right memes.
Presumably, the same pattern will hold for transhuman entities - their
ethical positions will evolve (perhaps by design) for maximum survival
advantage. Since aggression against large and advanced entities (such as
your parent civilization) can be dangerous, the likelihood of such
aggression would depend on how much benefit would accrue to the attacker,
versus the chance of a catastrophic failure (as in a counterattack by the
parent civilization). I think it is unlikely that many cases of patricide
will occur between the stars, simply because it is not good for transhuman
survival. In the long run, it pays to be nice, as long as behavior-tracking
and a few other procedures are possible.
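
To make the cost-benefit argument concrete, here is a minimal sketch (my
own illustration, nothing from Robert's post) of the iterated prisoner's
dilemma in Python. Once each player can track the other's past behavior,
the reciprocating "nice" strategy accumulates far more payoff than
unconditional defection, and aggression buys at most a one-round advantage:

# A minimal iterated prisoner's dilemma, illustrating why "it pays
# to be nice" once behavior-tracking (memory of past moves) exists.
# Payoff values are Axelrod's standard illustrative numbers -- an
# assumption for this sketch, not anything from the original post.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # sucker's payoff vs. temptation to defect
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def tit_for_tat(my_moves, their_moves):
    """Cooperate first, then copy the opponent's previous move."""
    return their_moves[-1] if their_moves else "C"

def always_defect(my_moves, their_moves):
    """The 'patricidal' strategy: defect unconditionally."""
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    """Run an iterated game; return the cumulative scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print("TFT vs TFT:          ", play(tit_for_tat, tit_for_tat))
print("TFT vs AlwaysDefect: ", play(tit_for_tat, always_defect))
print("Defect vs Defect:    ", play(always_defect, always_defect))

Over 200 rounds, two reciprocators score 600 each, mutual defectors only
200 each, and a defector exploiting a reciprocator nets just 204 against
199 -- a trivial gain against an entity that remembers and retaliates,
which is the point about behavior-tracking above.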
I also think that the ethical interpretation of extropy (again, for the
purpose of long-term survival) implies a categorical injunction against the
unwished-for destruction of innocent sentience. But this is a different
subject altogether.
Rafal