From: Smigrodzki, Rafal (SmigrodzkiR@msx.upmc.edu)
Date: Sun Jan 27 2002 - 08:17:23 MST
Eliezer wrote:
While I do worry about the problem of the shift from a hostile universe to
the friendly universe eliminating the opportunity for emotions like altruism
except in VR,
### I have two remarks to make about your statement:
1) It might be just trivial carping, but I doubt that a reliably friendly
Universe is possible. Even transhumans will have afflictions and
tribulations of (for us) unimaginable complexity and danger. In fact, it
seems to me that the higher the level of your structural complexity, the
more ways there are to disintegrate - while a stone can be crushed or
melted, a human can be crushed, racked, bastinadoed, harrowed by guilt or
feelings of inadequacy... you get the picture.
2) Assuming a niche exists where transhumans are fully protected from
anything truly untoward, the ethical value of altruism in that niche becomes
questionable. After all, the reason our highest-level cortical emotional
analysis centers reliably become wired during extended ontogeny to recognize
altruism as a value per se is that, during phylogeny, altruism was conducive
to survival, and that selection favored a bias towards fellows exhibiting
it. No categorical-imperative morality impels such a choice.
Still, I like your arguments in favor of transhuman fun. I hope to be able
to try out some of your solutions one day.
Rafal