Re: Future Technologies of Death

Nick Bostrom
Tue, 30 Dec 1997 01:15:10 +0000

Alexander 'Sasha' Chislenko wrote:

> Sometimes, one of the children may have resolved the problem that
> the whole batch of them was created (the task could be, for example,
> improving of default communication protocols between intelligent nodes
> in a knowledge network - i.e. development of a basic ethical system).
> After the solution is found and tested, it can be implemented, and the
> rest of the children can be suspended or killed in mid-execution.
> They won't feel "hurt", and neither will the environment - so what's the
> problem?

I suppose that in some people's ethics there would be a problem here.
If you live in the desert and nobody else knows you exist, and then
somebody shoots you in the head from behind, then it seems (to much of
conventional ethics) that you have been done wrong, even though you
are not around to complain about it and nobody else cares about you.

When you say "Nobody will feel hurt, so what's the problem?", does
that mean that you are a hedonist? If so, then the optimal
organization of the world might be one where there is some amount of
emotionless AI, and the rest of the available matter is transformed
into pleasure-maximal structures -- say simulations of
human brains having orgasms, but highly optimized and without
unnecessary cognitive functions.

> (I have a problem here, with entities trying to
> exercise their freedom and get rid of their unwanted parts, with these
> parts disagreeing. Could I shave off my moustache if it was intelligent
> and arguing for its right to continue its existence in its natural habitat?
> Or I could only create a clone of myself without the moustache, leaving
> the moustached original enslaved to a tuft of intelligent hair. How intelligent
> do we want the [sub-]entities to be to grant them such rights? )

Interesting question. I think that Anders has the rule of thumb that
an entity has rights if and only if it can demand them. That rule
has some repugnant consequences though. It seems to imply that we
could torture animals and butcher humans who are retarded. (Perhaps the
rule is only supposed to state a sufficient condition for having
rights, not a necessary one?)

> Even in this situation of infinite resources though, all interesting and efficient
> things/beings will be distinct from the ones created under "Rights of Continuing
> Existence" or "Rights of Implementation" Acts. Those entities will be all
> garbage, and every single one of them that has any intelligence will be aware
> of it. They will represent practically all existing structures at any time, and may
> occupy any point in the Universe, except that on its semantic map they will
> never be on the frontier, always banished to the inner semantic junkyard/ the Slow
> Zone, knowing that they are completely unneeded, their goals have no meaning,
> and their wishes to attach themselves to anything else, though always granted,
> create nothing but useless, ever-suffering cripples.

But then again, they may be perfectly content with watching TV, or
looking at their navels. There might be a pill that rids them of
their extropian urge for achievement.

BTW, we have the same problem today, with an increasing number of
old, decrepit people vegetating away in the old people's homes. These
people take up resources that could be used to sustain
vibrant, productive, life-enjoying young men and women. But nobody
seriously suggests that we massacre the old.

> If this all starts sounding like a bizarre dream, this must be for a reason. I think
> we should drop the remnants of anthropomorphism. We made a step in this
> direction when we started discussing evolution instead of static identities, and
> another step - with transition from structure threads to goal threads as the
> subjects of progress.

Yes, I think that the view that in the future values will serve as
the principium individuationis has much to be said for it.

Nick Bostrom

*Visit my transhumanist web site*