> hal@finney.org
>The program was able to work on an abstract system for which it had tools
>to modify it in various ways, and some way of measuring whether the
>modification improved things.
>After two or three iterations the process would hit a limit.
>First, the problem can be seen as a matter of getting stuck on a local
>optimum.
>Something similar can be defined for the self-improving program.
>One way to think of Mike's program's failure is that as the program got
>smarter, it got more complicated.
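Hal's local-optimum reading is easy to make concrete. Here is a minimal sketch (a toy of my own, not Mike's actual program, whose details I don't know): a hill-climber that applies random one-bit mutations to its own "design" and keeps only strict improvements. On a rugged fitness landscape it stalls after a handful of accepted changes, precisely because every single-step modification of the current design is no better.

import random

def fitness(design):
    # A deliberately rugged score: count agreeing adjacent bits.
    # Many designs are local optima under one-bit changes.
    return sum(1 for a, b in zip(design, design[1:]) if a == b)

def self_improve(design, tries=1000):
    # Hill-climb: accept a random one-bit mutation only if it
    # strictly improves the score.
    accepted = 0
    for _ in range(tries):
        candidate = design[:]
        candidate[random.randrange(len(candidate))] ^= 1
        if fitness(candidate) > fitness(design):
            design, accepted = candidate, accepted + 1
    return design, accepted

random.seed(0)
start = [random.randint(0, 1) for _ in range(20)]
best, n = self_improve(start)
print(f"accepted {n} improvements; fitness {fitness(best)} of a possible 19")

After the first few improvements, every one-bit change is neutral or worse, so the process halts well short of the global optimum, much as Hal describes.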
The same problems arise when we consider human-level augmentation. A fruitful place to begin is with an idea familiar to us: underneath most or all of our hopes for a more developed science of mind is the notion that we should be able to observe and control every aspect of our minds. So in the promise of uploading we can see that the whole mind is exposed (as code, or as some other representation). Now, I'm not necessarily challenging the idea, just exploring it, but many of us seem to think that an upload will have no trouble whatsoever effecting any desired change, including substantial upgrades. What guarantee do we have that there is no inherent problem in a human-class mind perceiving all of itself? There is of course the question of self-observation; it would certainly help to know how much control our consciousness really has over the rest of the mind.
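For what it's worth, there is at least one formal hint of trouble here. The sketch below is just the textbook diagonal argument from computability theory (nothing specific to uploads, and it assumes minds can be treated as programs, which is itself contestable): no predictor can be right about a system that is allowed to read the predictor's verdict on itself and then do the opposite.

def spite(predictor):
    # Build a system that consults the predictor's verdict about
    # itself and then does the opposite.
    def system():
        return not predictor(system)
    return system

# A toy "fully informed" predictor that claims every system
# returns True; it is necessarily wrong about its own nemesis.
predictor = lambda s: True
s = spite(predictor)
print(predictor(s), s())   # prints: True False

Whether anything like this bites a mind inspecting itself is exactly the open question; the point is only that complete access to the representation does not by itself guarantee reliable self-prediction, let alone effortless self-modification.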
None of this excludes the possibility of some other intelligence examining the upload and modifying it; but whether we can do it ourselves is very much the question, if what we intend to acquire is the self-control we so desire.