As I understand it, one of the hard parts of GA is a rigorous specification of
the goal toward which you are optimizing. As it happens, speed and space efficiency
are just about the easiest goals to specify, and the early GA success stories
frequently mention speed and space optimization.
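To make that concrete, here is a minimal toy sketch (all names hypothetical, not from any particular GA library) of why such goals are easy to specify: a fitness function that rewards task correctness while charging a resource-style cost reduces to a couple of numeric measurements, and the GA loop needs nothing more.

```python
import random

# Toy illustration: evolve a bit string toward a target pattern.
# "matches" stands in for the functional goal; "cost" is a stand-in
# for a space/speed penalty. Both are trivial one-line measurements,
# which is why such goals are among the easiest to specify.

TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

def fitness(candidate):
    # Reward agreement with the target pattern...
    matches = sum(1 for c, t in zip(candidate, TARGET) if c == t)
    # ...and lightly penalize set bits as a mock resource cost.
    cost = sum(candidate)
    return matches - 0.1 * cost

def evolve(generations=50, pop_size=20, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        # Elitist selection: keep the fitter half, mutate copies of them.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            i = rng.randrange(len(child))  # single-bit mutation
            child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

The entire "goal specification" lives in the two lines of `fitness`; contrast that with trying to score a fuzzier objective like interpretability or controllability.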
> More probable seems a subtler loss of the ability to control the specific
> direction algorithmic evolution takes as it progresses in complexity. Perhaps
> there is a dynamic tension between control and evolution that could offset
> much of the speed-derived gains, such that the necessity to intervene in the
> process to check-and-direct with human (even augmented human) intervention in
> the algorithmic evolutionary process imposes a relatively low upper limit on
> at least the first stages of "SI fetal development".
I agree that goal-setting is the big problem, but I'm more optimistic about
improving goal-setting efficiency. In my favorite model, the SI is a
collaboration, and the human is doing the goal-setting. The effectiveness of
this process is dramatically and continuously enhanced by improvements in
the data presentation and visualization algorithms that let the human
understand the problem to be solved.