Ok, I promised I'd shut up, but I won't quite yet ...
> If you can't, on a deep emotional level, see the connection between my
> work and the starving people in the Sudan, then this - from my perspective
> - is an emotional peculiarity on your part, not mine.
I do perceive the connection, of course, both rationally and emotionally.
Your work has a decent chance of increasing the probability that the
Singularity is good for humans, and it's therefore a very important kind of
work. I feel the same way about my own work. AI technology is going to do
a lot of good for a lot of people, someday. I do feel AI will be a
profoundly positive technology for humans, not a negative one like, say,
nuclear weapons, which I wouldn't enjoy working on in any case.
But yet, for reasons that are still not easy for me to articulate, I feel a
bit of discomfort with **solely focusing one's life** on this type of
compassionate activity -- on "helping people by doing things that will help
them in the future but don't affect them at all right now." This is a good
kind of activity to be doing, for sure. But yet, I feel that, in general,
this kind of long-term helping of others can be conducted better if it's
harmonized with a short-term helping of others.
I can partially articulate what's in my mind here via a parable, although
this doesn't catch everything I feel on the subject by any means.
Here's the parable. Suppose you're stuck on a boat in the middle of the
ocean with a bunch of people, and they're really hungry, and the boat is
drifting away from the island that you know is to the east. Suppose you're
the **only** person on the boat who can fish well, and also the only person
who can paddle well. You may be helping the others most by ignoring their
short-term needs (fish) in favor of their longer-term needs (getting to the
island). If you get them there, then eventually they'll reach an island
with lots of food on it -- a much better situation than drifting at sea
with a slightly full stomach.
If the other people don't **realize** the boat is drifting in the wrong
direction, though, because they don't know the island is there, then
what? Then they'll just think you're a bit loopy for paddling so hard. And
if they know you're a great fisherman, they'll be annoyed at you for not
helping them get food....
What is this little parable missing? Sociality. If you feed the other
people, they'll be stronger, and maybe they'll be more help in paddling the
boat. Furthermore, if you maintain a friendly relationship with them by
helping them out in ways that they perceive as well as ways that they do
not, then they're more likely to collaborate creatively with you in figuring
out ways to save the situation. Maybe because of their friendship with you,
they'll take your notion that there's an island to the east more seriously,
and they'll think about ways to get there faster, and they'll spot a
current you overlooked, which will carry the boat along with far less
paddling.
The difference here is between the following two attitudes:
1) Seeking to, as a lone and noble crusader, save the world in spite of
itself.
2) Seeking to cooperatively engage the world in the process of saving
itself.
To do 2, it's not enough to do things that **you** perceive are good for
everyone in the long run. You have to gain the good will of others, and
work together with them on things that both they and you feel are good.
Of course, it's impossible and undesirable to have a consensus among all
humans as to what is good and what is bad. So like most things in the human
world, the distinction between 1 and 2 is fuzzy rather than absolute.
I realize that you, Eli, are trying to cooperatively engage the world in the
process of saving itself your way, by publishing your thoughts on Friendly
AI. But I have an inkling that the way to cooperatively engage the world in
the process of saving itself ISN'T to try to convince people to see things
your way through rational argumentation. Rather, it's to enter into a
real dialogue where each side (transhumanists vs. normal people in this
case) makes a deep and genuine effort to understand the other's perspective.
> Since I *do*
> perceive the connection on a deep emotional level, I live free of the fog
> of guilt that pervades the First World.
I don't feel very much guilt, actually. I'm not sure what the word is for
what I feel.... Our language for describing emotions is very crude...
that's what music's for, I guess ;>
What I mean is: I am not made unhappy by observing my own reaction to the
starving people in the Third World. I don't think my own reaction, and the
way I conduct my life, is reprehensible. I'm just interested in finding
ways to do even better.
> There is absolutely *nothing* I
> could do that would help the rest of the world more than what I am already
> doing.
In my view, given the numerous uncertainties as to the timing and
qualitative nature of the Singularity, it is irrational of you to hold to
this view with such immense certainty.
Actually, I honestly feel that if you spent a year teaching kids in the
Sudan, you'd probably end up contributing MORE to the world than if you just
kept doing exactly what you're doing now. You'd gain a different sort of
understanding of human nature, and a different sort of connection with
people, which would enrich your work in a lot of ways that you can hardly
imagine at the moment. Not to mention a healthy respect for indoor
plumbing.
> I feel a need to do something about the starving people in the
> Sudan and I AM DOING IT; that need is now FULLY SATISFIED. Rather than
> striving for "balance" between personal life and altruism, I have decided
> to allow my desire to be altruistic to *completely* dictate my life-path.
Well, this part of the discussion is getting kind of personal, so I'll drop
it. Except to say that, in my view, sometimes this kind of "others over
self" attitude is psychologically healthy, and sometimes not.
> I think, Ben, that we react to certain problems in different ways. You
> see starving people in Sudan and think to yourself: "Is what I'm doing
> really helping? Maybe I should just give it all away."
I don't think anything that extreme, really. Not very often, anyway. I
just think that it might be better if I shifted the balance of my life a
little more in the direction of doing concretely helpful things for people.
Although this isn't practical for me at the moment, I do plan to effect this
change in my life at some point in the next few years.
> There was a time
> when I felt guilty on seeing starving people in Sudan, and what I thought
> to myself was: "Could I be working a little harder? Am I making the
> wrong tradeoff between rest time and work time, or between energy
> expenditure and energy recovery?" In other words, I saw starving people
> in Sudan and felt guilty - so I worked faster. (Nowadays I'm a bit more
> confident that I'm at the right equilibrium, and being guilty does
> dissipate mental energy, so I try to implement a policy of only
> reevaluating work habits when new information arrives.)
> > But this isn't the kind of
> > thing for which one can make a logical argument.
> *What*? Oh, never mind. (Now, if you were to say, "I'm sure there is a
> logical argument for this, but I haven't learned to formalize my gut
> feelings," that would be another matter.)
Well, there were several things underlying that statement...
1) It is true that it's difficult for me to formalize my gut feelings on
this topic.
2) Reason is only useful when there is adequate, and adequately crisp, data
for it to operate upon. In the presence of very fuzzy or very scant data,
logical inference is of less value than cruder "intuition-type" methods....
Questions like "Is it better to focus all one's compassionate efforts on
long-term things, or to balance long- and short-term compassionate efforts?"
are questions on which there is not sufficiently crisp data for logical
reasoning to be worthwhile.
3) In areas where there is scant or fuzzy data, our emotions tend to rule
us. Ethics and "how to live one's life" are definitely areas that fit this
criterion! Arguing about this stuff is sort of like arguing about
religion... not useful except in special cases.... Perhaps this dialogue is
one of those special cases, I'm not sure...
It's certainly clear to me that you and I could argue about this stuff until
we were both blue in the face and still not agree with each other, anyway!!
Anyway... enough of that... I should get some sleep tonight!
This archive was generated by hypermail 2b30 : Mon May 28 2001 - 10:00:00 MDT