Not all information can be computed if one doesn't have the right
inputs. Furthermore, even for things that can be computed, it's not
clear that there is some maximum computational "depth" (compute cycles
divided by output plus input length). It would be very interesting if
you could prove that a computer has some universal minimum discount
rate; that would be well worth publishing. However, you seem a long
way from showing this.
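
To pin down the parenthetical definition, here is a minimal sketch
(the function, names, and numbers are mine, purely illustrative):

    # "Depth" as sketched above: compute cycles spent, divided by the
    # combined length of input and output. Illustrative only.
    def depth(cycles, input_len, output_len):
        """Cycles of computation per symbol of input plus output."""
        return cycles / (input_len + output_len)

    # E.g., a run of 10**9 cycles mapping a 100-byte input to a
    # 10-byte output has a depth of about 9.1 million cycles per byte.
    print(depth(10**9, 100, 10))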
> Why do you think an SI will understand itself any more than we
> understand ourselves? And even if it could, that doesn't mean such
> understanding will lead to much improvement.
>
> Basically, I don't believe that we understand the basics of human
> cognition. Therefore our attempts at self-augmentation have no firm
> basis. We do, however, understand the basics of machine computation:
> we can design and build more powerful computer hardware and software.
> Since we understand this basis already, I believe that an SI can also
> understand it. I believe that an SI with a computer component will be
> able to design and build ever more powerful hardware and software,
> thus increasing its own capabilities. I think that this is likely to
> lead not just to an improvement, but to a rapid feedback process.
Consider an analogy with the world economy. We understand the basics
of this, and we can change it for the better, but that doesn't imply
an explosive improvement. Good changes are hard to find, and each one
usually makes only a minor improvement. You, in contrast, seem to
imagine a long series of relatively easy-to-find "big wins". If it
turns out that our minds are rather badly designed, you may be right.
But our minds may be better designed than you think.
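
For concreteness, here is a toy model of the two regimes being
contrasted (entirely my own illustration, with made-up parameters,
not anything either poster specified):

    # Toy contrast: steady, hard-won minor improvements versus a
    # feedback regime in which each improvement makes the next easier.
    # All names and numbers here are hypothetical.

    def minor_wins(c, steps, gain=0.02):
        """Economy-like regime: each change adds a fixed ~2%."""
        for _ in range(steps):
            c *= 1 + gain
        return c

    def feedback_wins(c, steps, gain=0.02):
        """Feedback regime: the gain itself scales with capability."""
        for _ in range(steps):
            c *= 1 + gain * c
        return c

    # After 50 steps, minor_wins gives ~2.7 (tame compound growth);
    # feedback_wins gives roughly 18 and then explodes within about a
    # dozen more steps: the "rapid feedback process" in miniature.
    print(minor_wins(1.0, 50))
    print(feedback_wins(1.0, 50))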
Robin D. Hanson hanson@hss.caltech.edu http://hss.caltech.edu/~hanson/