Robin Hanson wrote:
> Dan Clemmensen writes:
> >... At some point, the SI may
> >choose to add more humans, either volunteers or draftees, and grant some
> >level of autonomous identity, less than, equal to, or greater than we have
> >now. However, it's IMO impossible to assign a probability to any action the
> >SI may choose to take that isn't precluded by the laws of physics. That's
> >why I'm very interested in prognostications up to the advent of the SI, and
> >relatively uninterested in the post-SI era.
> Does saying "IMO" mean you don't think you can articulate *why* one can't
> "assign a probability"? Taken literally, your claim seems clearly false.
> So what exactly do you mean?
You are correct that my statement is clearly false. I can assign any probability I choose in the range 0 to 1 inclusive. What I meant was that I can think of no reasonable way to defend any particular choice. Hmm, where did I put my articulator? I know it's around here somewhere .... Ahh, yes. Ahem:
Since the SI will be vastly more intelligent than humans, IMO we may not be able to comprehend its motivations, much less predict them. The SI will be so smart that its actions are constrained only by the laws of physics, and it will choose a course of action based on its motivations.
Can we examine a set of feasible scenarios (i.e., scenarios not prohibited by the laws of physics)? Of course. But we can't enumerate them all, and IMO we can't even rationally select a methodology with which to weigh them against each other.
Let's look at a range of possible motivations and actions: M = motivation, A = action.
M: SI wants to maximize its intelligence quickly.
A: SI restructures the mass of the solar system into a computer.
M: SI wants to maximize intellectual diversity.
A: SI force-uploads everybody and starts GP on copies.
M: SI wants to maximize its power long-term
A: SI sends replicator probes in all directions.
M: SI wants to die.
A: SI terminates itself.
M: SI decides SIs are a BAD THING.
A: SI terminates self and humanity.
M: SI wants to see the end of the universe ASAP (having read both Douglas Adams and Tipler).
A: SI accelerates itself (self = all mass in the solar system?) at high continuous acceleration to enhance time dilation.
M: SI decides humanity is wonderful as it is.
A: SI goes covert, suppresses all other SIs and research leading to them.
OK, how do I assign probabilities to the motivations, and how do I assign probabilities that a listed action follows from the motivation?
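To make the shape of the problem concrete: any such assignment reduces to the law of total probability, P(A) = sum over M of P(A|M) * P(M). Here's a minimal sketch of the mechanics, where every number is an arbitrary placeholder I invented for illustration -- which is exactly the objection: the machinery runs fine, but nothing defends the inputs.

```python
# Sketch of "assigning probabilities" to SI motivations and actions.
# P(A) = sum over M of P(A|M) * P(M) -- law of total probability.
# ALL numbers below are made-up placeholders, not defensible estimates.

priors = {                       # P(M): prior over motivations (arbitrary)
    "maximize intelligence": 0.25,
    "maximize diversity":    0.25,
    "maximize power":        0.25,
    "self-terminate":        0.25,
}

likelihoods = {                  # P(listed action | M): also arbitrary
    "maximize intelligence": 0.9,   # restructures solar system into computer
    "maximize diversity":    0.5,   # force-uploads everybody
    "maximize power":        0.7,   # sends replicator probes
    "self-terminate":        1.0,   # terminates itself
}

# Probability that the SI performs the action listed for its motivation:
p_any_listed_action = sum(priors[m] * likelihoods[m] for m in priors)
print(p_any_listed_action)
```

The output is entirely determined by numbers no one can justify, and the motivation list itself is not exhaustive, so the sum isn't even over a complete partition.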
Meta-problem (extra credit): Are the human concepts of motivations, actions and the relationships between them applicable to an SI?