> O'Regan, Emlyn <Emlyn.ORegan@actew.com.au> Wrote:
> >Say you were a super super ... super intelligence (S^NI), modified
> >beyond all comparison with the Gaussian version of yourself. After a
> >particular new modification to jump you up to a new level of
> >intelligence, you find you are so awesomely intelligent that you can
> >predict with 99.99% certainty the outcome of any action that you
> >might consider,
John K Clark wrote:
> It could never happen. You may be far more intelligent than a human,
> but the thing you're trying to figure out, yourself, is far more
> complex. You'd be no better off than humans are at predicting what
> you'd do next, although you could do something we can't: you could
> predict what we puny humans would do next.
I don't think that's actually true. A major difference between an SI and a human is that the SI is designed by itself; in fact, it is designed by the previous, stupider version of itself. We humans did not design and build ourselves. SIs will have a much better understanding of themselves than humans do (although maybe not a complete understanding).
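For concreteness: there is a standard diagonalization argument (my own illustration, not from the thread; the function names are made up) for why *perfect* self-prediction is impossible even in principle, and it's worth seeing that it doesn't conflict with the "much better but incomplete understanding" point. A predictor that claims to foresee any agent's choice can always be handed an agent built to consult that very predictor and do the opposite, so no consistent answer exists:

```python
def predict(agent):
    # Hypothetical prediction oracle: its only way to foresee what
    # `agent` will do is to simulate the agent, handing it access
    # to the oracle itself -- just as a self-modeling SI must
    # include its own predictor in the model.
    return agent(predict)

def contrarian(oracle):
    # Adversarial agent: ask the oracle what I will do, then do the
    # opposite. Any fixed answer the oracle gave would be wrong.
    guess = oracle(contrarian)  # mutual recursion, never bottoms out
    return "B" if guess == "A" else "A"

def run():
    # The simulation regress has no fixed point; in Python it
    # surfaces as a blown recursion limit.
    try:
        predict(contrarian)
        return "predicted"
    except RecursionError:
        return "no consistent prediction"
```

Note this only rules out 100%-reliable prediction of every action; statistical, approximate self-knowledge (the SI reading its own blueprints) is untouched by the argument.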