Re: Six theses on superintelligence

From: Anders Sandberg (asa@nada.kth.se)
Date: Fri Feb 09 2001 - 12:29:47 MST


"Mitchell Porter" <mitchtemporarily@hotmail.com> writes:

> 1. The 'problem of superintelligence' can be separated into
> two subproblems, the "general theory of self-enhancement"
> and the "problem of the initial conditions". The first
> subproblem is a problem in theoretical computer science,
> the second is an ethical and political problem.

At least it is a workable division of the two main bugaboos in the
discussion.
 
> 2. Self-enhancement: It seems likely to me that there is
> an optimal strategy of intelligence increase which
> cannot be bettered except by luck or by working solely
> within a particular problem domain, and that this
> strategy is in some way isomorphic to calculating
> successive approximations to Chaitin's halting
> probability for a Turing machine given random input.

Why would this be isomorphic to Chaitin approximations? I may have had
too little sleep these last few nights, but it isn't clear to me.
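
For concreteness, the usual "successive approximation from below" for
Omega works like this: dovetail all programs on a prefix-free universal
machine, and whenever one is seen to halt, add 2^-(its length) to a
running total. A toy Python sketch of the staging, where the machine is
an invented stand-in rather than a real prefix-free universal one:

    # Toy sketch (my own construction): as a stand-in machine I use the
    # prefix-free program set {1^k 0} and let "program k" mean "iterate
    # the Collatz map from k+1 until it reaches 1".

    def halts_within(k, steps):
        """Does toy program k (Collatz from k+1) reach 1 within `steps`?"""
        n = k + 1
        for _ in range(steps):
            if n == 1:
                return True
            n = 3 * n + 1 if n % 2 else n // 2
        return False

    def omega_stage(max_k, steps):
        """One stage of the lower approximation: credit 2^-|p|
        (|p| = k + 1 for program 1^k 0) to every program seen to halt."""
        return sum(2.0 ** -(k + 1)
                   for k in range(max_k + 1)
                   if halts_within(k, steps))

    # Each stage enlarges both the program set and the step budget, so
    # the estimates increase monotonically toward the (toy) limit.
    for stage in range(1, 6):
        print(stage, omega_stage(10 * stage, 50 * stage))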

I'm not as certain as you are that a unique optimal strategy exists.
Without restricting yourself to a particular problem domain, the no free
lunch theorems get you. Taking the problem domain to be 'the entire
physical universe' doesn't really help, since you also have to include
the probability distribution of the environment, and that will depend
heavily not just on the interests but also on the actions of the being.
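
The no free lunch point is easy to check by brute force on a toy space.
A small Python sketch, with the domain, value set and visiting orders
chosen purely for illustration: averaged over all objective functions,
two non-revisiting searchers need exactly the same number of
evaluations to find the maximum.

    from itertools import product

    DOMAIN = range(4)                # tiny search space
    VALUES = range(3)                # possible objective values

    def evals_to_best(order, f):
        """Evaluations a fixed-order searcher needs to first hit max(f)."""
        best = max(f)
        for i, x in enumerate(order, 1):
            if f[x] == best:
                return i
        # unreachable: the order visits every point

    searcher_a = [0, 1, 2, 3]        # left-to-right scan
    searcher_b = [2, 0, 3, 1]        # some other fixed visiting order

    fns = list(product(VALUES, repeat=len(DOMAIN)))   # all 81 functions
    avg_a = sum(evals_to_best(searcher_a, f) for f in fns) / len(fns)
    avg_b = sum(evals_to_best(searcher_b, f) for f in fns) / len(fns)
    print(avg_a, avg_b)              # identical over the uniform prior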

> 3. If this is so, then once this strategy is known,
> winning the intelligence race may after all boil down
> to hardware issues of speed and size (and possibly to
> issues of physics, if there are physical processes
> which can act as oracles that compute trans-Turing
> functions).

What if this strategy is hard to compute efficiently, and different
choices of initial conditions produce noticeable differences in
performance?
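
A toy numeric aside, with a growth rule invented purely for
illustration: if each round's gain scales with current capability, a
small head start in the initial conditions widens instead of washing
out.

    def trajectory(c0, steps=20, rate=0.3):
        c = c0
        for _ in range(steps):
            c += rate * c          # self-enhancement compounds
        return c

    a, b = trajectory(1.00), trajectory(1.05)
    print(a, b, b - a)             # the initial 0.05 gap grows ~190-fold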

> 5. Initial conditions: For an entity with goals or values,
> intelligence is just another tool for the realization
> of goals. It seems that a self-enhancing intelligence
> could still reach superintelligence having started with
> almost *any* set of goals; the only constraint is that
> the pursuit of those goals should not hinder the process
> of self-enhancement.

Some goals are not much helped by intelligence beyond a certain level
(like, say, gardening), so for them the self-enhancement process would
peter out long before it reached any hard limits.
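
A toy way to see the petering out, with numbers that are mine alone: if
the goal's payoff saturates with intelligence while each enhancement
step carries a fixed cost, the process stops at a finite level.

    import math

    def payoff(intelligence):
        return 1 - math.exp(-intelligence)   # saturating returns

    COST = 0.01                              # fixed cost per step
    level, step = 1.0, 0.5
    # Enhance only while the marginal payoff still exceeds the cost.
    while payoff(level + step) - payoff(level) > COST:
        level += step
    print("enhancement peters out at level", round(level, 2))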

-- 
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y


