From: Samantha Atkins (firstname.lastname@example.org)
Date: Mon Jan 28 2002 - 04:10:21 MST
I think I (and many) understand your question. I am not sure
that you understood some of the answers that might look flippant
but really aren't (although some were flippant imho).
Chen Yixiong, Eric wrote:
>>Obviously the intent of you asking these questions is very serious, which is
>>appreciated in a community such as this. But my opinion is if you want
>>serious answers, then ask the question(s) seriously, being specific, and
>>preferably one question per conversation thread to start with.
I am not sure some of the most important questions *can*
actually be broken down meaningfully into specific, single
questions.
> If we do not have meta-goal type thinking to guide us, then how do we do things like formulating ethics and devising philosophy? I
> would like to know this very much. Why do we do what we do, and what guides us to do so? What makes us think of "right" and "wrong",
> "acceptable" and "unacceptable" and many other things?
This is a very good question. What is our ethical basis that is
tied up in and validates (perhaps is partially validated by) our
vision/notion of the good, of what we would achieve and the
means by which it can be achieved?
> If we consider everything relative, then we can never condemn Hitler for gassing Jews nor praise Mother Theresa for helping the
> unfortunate. So, why do we believe in progress, in freedom and in pursuing the unknown? How can two nulls make a one?
> I wonder that perhaps here, we would encounter Gödel's Theorem again. How do we know that "something is preferable over nothing
> including even the experience of nothing"? I don't want to know that we wish to survive, but for what reason do we even bother to
> survive? I don't want to know about freedom, but why do we even survive and how do we know we should desire freedom. It seems like
> when we have ignorance we have confusion, but when we have knowledge we have delusions. How do we transcend this?
> Taking the cue from http://sysopmind.com/tmol-faq/tmol-faq.html, why do we get up in the morning? If we do so to experience
> happiness, to increase knowledge and to contribute to altruism, which will lead to a better universe, then how do we know we have a
> better universe. How would such a universe differ from our own? Why would we decide that we should target this "better"?
If we can have a universe where everyone that wants it has the
ability to grow in knowledge, intelligence, creativity, and in
any and all things that they find desirable almost without limit
relative to where we are today, I cannot help but see that as a
huge improvement. If the 70% of humanity who are illiterate,
and the 99% without a college-level education or any
computational resources at all, could have access to staggering
amounts of computing power and all the knowledge of humankind
(and more), then I consider that a
very happy and desirable goal to work toward. If every single
human being could never ever again have to go without adequate
(and even palatial by today's standards) shelter and full
nutrition and could live all their indefinitely long years in
perfect health, I consider that very, very good. Being able to
expand our minds individually and collectively far, far beyond
what we can imagine I consider very good and utterly enticing.
Altruism is just a word. Creating a world where every single
one of us can be enriched beyond our dreams is much, much more
than just a word.
We stand on the threshold of being able to do all of these
things. To do them takes both technical breakthroughs and
breakthroughs in ourselves. The above is not how we habitually
see ourselves, the world, and "just the way things are". A lot
of our programming is in the way of our consistently seeing and
working toward such a state and arriving in it.
We are in a bootstrapping mode in some ways. We can see a lot
that is possible. Out of the possibilities we need to weave
visions of futures that draw us deeply, that we desire deeply
enough to be utterly motivated to create if at all possible. It
is not a matter, to me, of sitting around trying to define
"good" first and rigorously prove it is "good" up front. It is
a matter of "where do we want to go and how can we get there
and what can I do to help".
> Would a sentient intelligence do better than us even if it converted the entire universe into computing substrate? One day it would
> have to stop existing, even if we put the day of reckoning away by migrating to other universes accidents can still happen, and
> given eternity, the probability of any improbable event happening will eventually go to one.
Define "do better"? This is the rub. Would you rather have
huge possibilities for growth and happiness for yourself and
everyone else or be where you are now for the rest of your
(probably short) existence? Choose and live your choice.
> Knowing our long-term mortality, what do we do then? If we will have to die some day, then why do we live? [Please avoid viewing
> this from a "pessimistic" point but maintain a neutral one.] Does our happiness, knowledge and other things really matter? Do we
> really have freedom?
That I may die someday doesn't really bother me. What really
bothers me is that given so much possibility to really remake
the circumstances of all of humankind we might be too screwed
up, too stupid, lazy and so on to actually do much of anything
with it other than figure out fancier ways to enslave, exploit
and kill one another and justify our individual and collective
meanness. That bothers me a lot.
This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 13:37:36 MST