From: Samantha Atkins (email@example.com)
Date: Mon Jan 28 2002 - 22:04:37 MST
> In a message dated 1/28/02 3:29:57 AM, firstname.lastname@example.org writes:
>>This is, imho, one of her major blunders. She claimed that an
>>immortal robot would have no need for ethics and no standard of
>>values. This speaks rather badly for our future SI, or even for
>>ourselves. Values are a matter of choice of what one most cares
>>about achieving, not simply and only what one most needs to
>>continue living when death is an alternative and the ultimate
>>disvalue. Effectively immortal beings can thus also have values
>>as can software and robotic sentiences.
> Rand's terms are a little misleading here. "Values" means not
> things you want, but things you *have* to want to be you.
> (Also, "immortal" in this context means cannot die, not merely
> eternally youthful or some such).
I don't think that is what she meant. She was looking for a
theory of value, of what values are "good for" and what makes
values rational/objective. In the process she limited herself
to a subset of values, imho, which I guess could be what you are saying.
> Translated into more conventional language, she's saying that
> an immortal robot could choose any value set, and implying
> that as a consequence its values would be bizarre and meaningless,
> and likely to become more so over time. Given that the robot was
> non-reproductive, I would agree with her, and expect a universe
> of such things to be filled with things chasing stuff like perfect
> Go games and interesting methods to torture small animals.
So, what does this say, if correct, about ourselves once we reach
the point where nothing short of something that wiped out
several light-years of our best structures and backups could
kill us? Are we then condemned to have only utterly arbitrary
and "bizarre and meaningless" values? Do we have to keep the
possibility of final death ever close to stay sane?