Re: Singularity: Individual, Borg, Death?

Nick Bostrom (bostrom@ndirect.co.uk)
Sun, 6 Dec 1998 16:32:18 +0000

Eliezer S. Yudkowsky wrote:

> > > Rational Assumptions:
> > > RA1. I don't know what the objective morality is and neither do you.
> > [snip]
> > > LA1 and RA1, called "Externalism", are not part of the Singularity logic per
> > > se; these are simply the assumptions required to rationally debate
> > > morality.
> >
> > I don't see why a rational debate about morality would be impossible
> > if you or I knew the "objective morality". People often
> > rationally debate issues even when one party already knows where the
> > truth lies.

> For two people to argue rationally, it is necessary for each to accept the
> possibility of being wrong, or they won't listen. Maybe this doesn't hold on
> the other side of dawn, but it surely holds true for the human race.

I have often rationally debated positions that I thought I knew were correct. I have even on occasion been involved in rational debates where *both* parties agreed where the truth lies, but one party decided to play devil's advocate. The game then consists of coming up with good arguments for or against a proposition that both recognize as true. So I don't agree that if you or I knew the "objective morality" it would be impossible for us to debate morality rationally; that claim seems simply false.

> First of all, I don't know that it is wrong to torture innocent people for a
> small amount of fun. It is, with at least 90% probability, not right. I
> can't be at all certain that "2+2 = 4", but I can be almost totally certain
> that "2+2 does not uniquely equal 83.47".

The way I use the word "know", you don't have to be 100% certain about something in order to know it. But in any case, nothing in my argument changes if you use a probabilistic statement instead.

> > For example, if it is morally preferred
> > that the people who are currently alive get the chance to survive
> > into the postsingularity world, then we would have to take this
> > desideratum into account when deciding when and how hard to push for
> > the singularity.
>
> Not at all! If that is really and truly and objectively the moral thing to
> do, then we can rely on the Post-Singularity Entities to be bound by the same
> reasoning. If the reasoning is wrong, the PSEs won't be bound by it. If the
> PSEs aren't bound by morality, we have a REAL problem

Indeed. And this is another point where I seem to disagree with you. I am not at all certain that being superintelligent implies being moral. Certainly there are very intelligent humans who are also very wicked; I don't see why, once you pass a certain threshold of intelligence, it should no longer be possible to be morally bad. What I might agree with is that once you are sufficiently intelligent, you should be able to recognize what's good and what's bad. But whether you are motivated to act in accordance with these moral convictions is a different question. What weight you give to moral imperatives in planning your actions depends on how altruistic/moral you are. We should therefore make sure that we build strong moral drives into the superintelligences. (Presumably, we would also want to link these moral drives to a moral system that places great value on human survival, since that would increase our own chances of survival.)

>, but I don't see any way
> of finding this out short of trying it.

How to control an SI? Well, I think it *might* be possible by programming the right values into the SIs, but let's not go into that now.

> Or did you mean that we should push
> faster and harder for the Singularity, given that 150,000 people die every day?

That is a consideration, though we have to put it in perspective, i.e. consider it in the context of the total number of sentiences that have died or may yet come to exist.

> > In the hypothetical case where we could dramatically
> > increase the chances that we and all other humans would survive, by
> > paying the relatively small price of postponing the singularity one
> > year, then I feel pretty sure that the morally right thing to do
> > would be to wait one year.
>
> For me, it would depend on how it affected the chance of Singularity. I don't
> morally care when the Singularity happens, as long as it's in the next
> thousand years or so. After all, it's been fifteen billion years already; the
> tradeoff between time and probability is all in favor of probability. From my
> understanding of causality, "urgency" is quite likely to be a human
> perversion. So why am I in such a tearing hurry? Because the longer the
> Singularity takes, the more likely that humanity will wipe itself out.

That is also a valid consideration.

> I
> won't go so far as to say that I'm in a hurry for your sake, not the
> Singularity's - although I would personally prefer that my grandparents make
> it - but delay hurts everyone.

Unless a one- or two-year delay would give us time to fine-tune the goal systems of the SIs, so that they would be more likely to be moral (and kind to us humans).

> > In reality there could well be some kind of tradeoff like that.
>
> There's a far better chance that delay makes things much, much worse.

I think it will all depend on the circumstances at the time, for example on the state of the art in nanotechnology at that point. You can't say that sooner is *always* better, although it may be a good rule of thumb. Clearly there are cases where it is more prudent to take extra precautions before launch, and in the case of the singularity we would seem well advised to take as many precautions as we have time for.

>
> > It's
> > good if superintelligent posthumans are created, but it's also good
> > if *we* get to become such beings. And that can in some cases impose
> > moral obligations on us to make sure that we ourselves can survive
> > the singularity.
>
> Why not leave the moral obligations to the SIs, rather than trying (futilely
> and fatally) to impose your moral guesses on them?

Because, as I said above, if we build them the wrong way they may not be moral. Besides, moral or not, we would want to make sure that they are kind to us humans and allow us to upload.

Nick Bostrom
http://www.hedweb.com/nickb      n.bostrom@lse.ac.uk
Department of Philosophy, Logic and Scientific Method
London School of Economics