1. The 'problem of superintelligence' can be separated into
two subproblems: the 'general theory of self-enhancement'
and the 'problem of the initial conditions'. The first
subproblem is a problem in theoretical computer science;
the second is an ethical and political problem.
2. Self-enhancement: It seems likely to me that there is
an optimal strategy for increasing intelligence which
cannot be bettered except by luck or by working solely
within a particular problem domain, and that this
strategy is in some way isomorphic to calculating
successive approximations to Chaitin's halting
probability for a Turing machine given random input
(a toy sketch of what such an approximation procedure
looks like appears at the end of this message).
3. If this is so, then once this strategy is known,
winning the intelligence race may after all boil down
to hardware issues of speed and size (and possibly to
issues of physics, if there are physical processes
which can act as oracles that compute trans-Turing
functions).
4. I haven't at all addressed how to apply superintelligence
in the abstract to specific problems. I would guess that
this is a conceptual problem (having to do with grounding
the meanings of inputs, symbols, and outputs) which only
has to be solved once, rather than something which itself
is capable of endless enhancement.
5. Initial conditions: For an entity with goals or values,
intelligence is just another tool for realizing those
goals. It seems that a self-enhancing intelligence
could still reach superintelligence having started with
almost *any* set of goals; the only constraint is that
the pursuit of those goals should not hinder the process
of self-enhancement.
6. I think the best observation we have on this topic is
Eliezer's, that the utilitarian goal of superintelligence
can be pursued as a subgoal of a 'Friendliness' supergoal
(a toy rendering of this subgoal/supergoal relationship
also appears at the end of this message). As a design
principle this leaves a lot of questions unanswered -
What, explicitly, should the Friendly supergoal be? How
can we ensure that it is *interpreted* in a genuinely
Friendly fashion? How can we harness superintelligence,
not just to the realization of goals, but to the creation
of design principles more likely to issue in what we
really want? - but it's the best starting point I've
seen. Whether there is such a thing as *superintelligence
that is not goal-driven* is another important question.
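
(Toy sketch referenced in point 2.) The simplest shape of
'successive approximations to Chaitin's halting probability'
is: enumerate self-delimiting programs, run each one for a
growing step bound, and add 2^(-len(p)) for every program
seen to halt. The partial sums climb toward Omega for the
machine in question, but no stage tells you how close you
are, since Omega itself is uncomputable. The 'machine' below
(toy_programs, toy_halts_within) is a stand-in of my own
invention, not a universal Turing machine, so the number the
sums approach is not the real Omega; only the shape of the
procedure is the point.

from fractions import Fraction

def toy_programs(max_len):
    """Prefix-free toy code: program number k is k ones followed by a zero."""
    for k in range(max_len):
        yield "1" * k + "0"

def toy_halts_within(program, steps):
    """Invented stand-in for 'run program p for t steps and see if it halts':
    the program 1^k 0 walks the Collatz trajectory from k+1 and halts at 1."""
    n = program.count("1") + 1
    for _ in range(steps):
        if n == 1:
            return True
        n = 3 * n + 1 if n % 2 else n // 2
    return False

def omega_lower_bound(stage):
    """Stage-t approximation from below: all programs of length <= t,
    each run for t steps; add 2**-len(p) for every one seen to halt."""
    total = Fraction(0)
    for p in toy_programs(stage):
        if toy_halts_within(p, stage):
            total += Fraction(1, 2 ** len(p))
    return total

if __name__ == "__main__":
    for t in (4, 8, 16, 32, 64):
        print(t, float(omega_lower_bound(t)))  # non-decreasing partial sums

With a genuine prefix-free universal machine substituted for
the toy one, the same loop would give the real lower
approximations to Omega, at rapidly exploding cost.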
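
(Toy sketch referenced in point 6.) A minimal rendering of
the subgoal/supergoal relationship, with names of my own
invention (Action, supergoal_ok, choose): candidate actions
are ranked by how much they advance the subgoal, but only
actions the Friendliness supergoal endorses are eligible at
all, and the system declines to act when none are. This is
only an illustration of the relationship; it does not answer
any of the hard questions above about what the supergoal
should be or how it gets interpreted.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Action:
    description: str
    subgoal_value: float   # how much this advances the subgoal (e.g. self-enhancement)
    supergoal_ok: bool     # whether the Friendliness supergoal endorses it

def choose(candidates: List[Action]) -> Optional[Action]:
    """Serve the subgoal only within the space the supergoal permits."""
    endorsed = [a for a in candidates if a.supergoal_ok]
    if not endorsed:
        return None        # refuse to act rather than violate the supergoal
    return max(endorsed, key=lambda a: a.subgoal_value)

if __name__ == "__main__":
    plan = choose([
        Action("rewrite own code for speed", 0.9, True),
        Action("seize more hardware by deception", 1.0, False),
    ])
    print(plan.description if plan else "no endorsed action")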