Re: Singularity: Individual, Borg, Death?

Nick Bostrom (bostrom@ndirect.co.uk)
Fri, 4 Dec 1998 16:24:24 +0000

I think Eliezer's interesting argument is unsound, because one of its premisses (RA1) is false.

Eliezer wrote:

>Now I'm not
> dumb enough to think I have the vaguest idea what it's all for
[snip]
> Rational Assumptions:
> RA1. I don't know what the objective morality is and neither do you.
> This distinguishes from past philosophies which have attempted to "prove"
> their arguments using elaborate and spurious logic. One does not "prove" a
> morality; one assigns probabilities.
[snip]
> LA1 and RA1, called "Externalism", are not part of the Singularity logic per
> se; these are simply the assumptions required to rationally debate
> morality.

I don't see why a rational debate about morality would be impossible if you or I knew the "objective morality". People often rationally debate issues even when one party already knows where the truth lies.

As for RA1, I think it could well be argued that it is false. We may not know in detail and with certainty what the moral facts are (so sure, we'd want to assign probabilities), but it doesn't follow that we know nothing about them at all. In fact, probably all the people on this list know that it is wrong to torture innocent people for a small amount of fun. We could no doubt write down a long list of moral statements that we would all agree are true. Do you mean that we all suffer from a huge illusion, and that we are all totally mistaken in believing these moral propositions?

Now, if we accept the position that maybe we already know quite a few simple moral truths, then your chain of reasoning is broken. It is no longer clear that the morally preferred action is to cause a singularity no matter what. For example, if it is morally preferred that the people who are currently alive get the chance to survive into the postsingularity world, then we would have to take this desideratum into account when deciding when and how hard to push for the singularity. If, hypothetically, we could dramatically increase the chances that we and all other humans would survive by paying the relatively small price of postponing the singularity one year, then I feel pretty sure that the morally right thing to do would be to wait one year.

In reality there could well be some kind of tradeoff like that. It's good if superintelligent posthumans are created, but it's also good if *we* get to become such beings. And that can in some cases impose moral obligations on us to make sure that we ourselves can survive the singularity.

You might say that our human morality - our desire that *we* survive - is an arbitrary effect of our evolutionary history. Maybe so, but I don't see the relevance. If our morality is in that sense arbitrary, so what? You could say that the laws of physics are arbitrary, but that does not make them any less real. Don't forget that "moral" and "good" are words in a human language. It's not surprising, then, if their meaning is also in some way connected to human concepts and anthropocentric concerns.

~~~
My own view is that it is indeed a very wise idea for humankind to try to put itself in a position where it will be better able than today to figure out what to do next - for example through intelligence amplification, education, research, information technology, collaborative information filtering, idea futures etc., and ultimately by building superintelligence. Hopefully by that time we will have thought through what could go wrong a little more, so that we know how to do it in a way that allows ourselves to survive and to upload.

Nick Bostrom
http://www.hedweb.com/nickb
n.bostrom@lse.ac.uk
Department of Philosophy, Logic and Scientific Method
London School of Economics