On Fri, Feb 09, 2001 at 02:49:35AM +0000, Mitchell Porter wrote:
> 5. Initial conditions: For an entity with goals or values,
> intelligence is just another tool for the realization
> of goals. It seems that a self-enhancing intelligence
> could still reach superintelligence having started with
> almost *any* set of goals; the only constraint is that
> the pursuit of those goals should not hinder the process
> of self-enhancement.
I'm not sure I agree.
My take on human intelligence (our only real existence proof to date)
is that human beings are *not* very intelligent. We're marginally
intelligent -- sufficiently so to act as a substrate for the transfer
and evolution of ideas -- and it's this evolution of ideas (memes) that
gives us the appearance of intelligence. It takes a lot of human beings
to generate new memes successfully. If we *do* develop an AI capable
of enhancing itself, (a) it will be a descendant of our current memes
that does the job, and (b) the AI will itself be a vehicle for such memetic
evolution. Think in terms of the intelligence evolving its own goals,
rather than goal-driven evolution leading to the pursuit of enhanced
intelligence. (I'm tying myself in knots here, so I'll stop for a while.)
> 6. I think the best observation we have on this topic is
> Eliezer's, that the utilitarian goal of superintelligence
> can be pursued as a subgoal of a 'Friendliness' supergoal.
> As a design principle this leaves a lot of questions
> unanswered - what, explicitly, should the Friendly
> supergoal be? how can we ensure that it is *interpreted*
> in a genuinely Friendly fashion? how can we harness
> superintelligence, not just to the realization of goals,
> but to the creation of design principles more likely to
> issue in what we really want? - but it's the best starting
> point I've seen. Whether there is such a thing as
> *superintelligence that is not goal-driven* is another
> important question.
We don't know any superintelligences yet, but we know quite a lot of
(rather similar) intelligences. What goals drive *us*, and why?
If we can answer that question, we may be able to discuss how SIs
might discover or evolve their own goals.
This archive was generated by hypermail 2b30 : Mon May 28 2001 - 09:56:37 MDT