>If mortal life is totally meaningless, it would be logical to
>exterminate them for their spare atoms. Mortals, knowing this, will
>refuse to create Singularities. If mortals could bargain with the
>Singularity, it would obviously be to the Singularity's advantage to set
>aside a quadrillionth of computing power for Permutation-City-style
>accommodations, in return for existing at all. But we can't bargain with
>the Singularity until after we've created it and our hold is gone.
>Bearing this in mind, how can you bind the Singularity to the bargain?
>What is the Singularity's logical course of action?
Until the Singularity runs into an atom shortage, what motivation does it have to go out of its way, spending any computation at all, on determining the optimal way to exterminate all sentient life? If a Singularity is efficient at every level, then the only time it will bother to destroy humans is when they somehow conflict with its objectives.
>Bonus question one: Suppose you have a time machine that can ONLY
>convey the information as to whether or not the Singularity will happen.
> If you change your plans as a result of the indicator, it resends.
>This is one bit of information and thus can be modulated to convey
>messages from the future. How do you negotiate?
I'm not sure I understand this question. The time machine must exist in the pre-Singularity period; otherwise its information is useless. But how can that information help us negotiate with a Singularity that does not yet exist? A more effective machine would send back whether humans still exist at some distant future point, which would attest to the Singularity's fidelity to the bargain. However, such a machine would only ensure survival up to that point, not beyond it.
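The "one bit, modulated" idea in the question is essentially an information channel: if the indicator resends whenever you change your plans in response to it, then each round of conditional commitment extracts one bit, and repeated rounds spell out a longer message. A toy sketch of that accumulation, where the `oracle` function is a purely hypothetical stand-in for the time machine's yes/no indicator:

```python
# Hypothetical sketch: turning a repeated one-bit indicator into a
# multi-bit channel. In round k we publicly commit to act on bit k of
# the future's message, so that round's answer carries that bit.

def receive_message(oracle, n_bits):
    """Read n_bits from a one-bit oracle, one query per round."""
    bits = []
    for k in range(n_bits):
        bits.append(oracle(k))  # round k: indicator conveys bit k
    return bits

def bits_to_text(bits):
    """Pack the accumulated bits back into ASCII characters."""
    chars = []
    for i in range(0, len(bits), 8):
        byte = bits[i:i + 8]
        chars.append(chr(int("".join(map(str, byte)), 2)))
    return "".join(chars)

# Toy "future message" encoded by the stand-in oracle.
message_bits = [int(b) for ch in "OK" for b in format(ord(ch), "08b")]
oracle = lambda k: message_bits[k]
print(bits_to_text(receive_message(oracle, len(message_bits))))  # OK
```

The point is only that a single resendable bit is a channel of unbounded capacity given enough rounds; whether any negotiation protocol survives contact with an entity that also sees the channel is the open question.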
>Bonus question two: In both cases above, it is necessary to plausibly
>threaten not to create a Singularity. The only other option, in the
>long run, is exterminating the human race. This has to be able to
>plausibly happen, either as a logical consequence or as an alternate
>future. How do you force yourself to destroy the Earth?
Since the motivation for not creating a Singularity is to preserve the continued existence of humans, this option seems unacceptable. We could probably slay all child molesters by nuking the entire planet, but we'd also kill all the children (and everyone else) with them. The marginal benefit of the action does not outweigh the marginal cost.
The clear truth is that humans cannot protect themselves from quantitative SIs, let alone qualitative SIs. The only hope for human minds is to be an integral part of such entities.