From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Apr 16 2003 - 16:00:23 MDT
Keith Elis wrote:
> One of the refrains among AI Singularitarians is this notion of
> 'recursive self-enhancement' -- positive feedback, an AI hacking itself
> into something smarter, etc. Has anyone described what this process
> would look like at the software level? I don't mean as a theory of
> intelligence, or as an architecture of intelligence, but as a matter of
> engineering a recursive process in software. Though people are still
> working on how to code some rudimentary form of intelligence, that's not
> the whole problem. Has a software architecture been described in enough
> detail to settle that a program can successfully modify itself? How in
> the world does it work? Where are the proof-of-concept applets I can
> download? Someone must have shown that recursively self-modifying
> programs are tractable, since we talk about recursively self-modifying,
> *intelligent*, and *friendly* programs so often. I'm sure someone has.
> Yudkowsky has much of the market cornered on this topic, but in staying
> abreast of his web pages over the last 6 years, I don't recall an answer
> to this question. Nor did I find anything at singinst.org. If I missed
> it, my apologies.
"Levels of Organization in General Intelligence, Part III: Seed AI"
http://singinst.org/LOGI/seedAI.html
If you've already read this, let me know. LOGI speaks not of small
recursively self-modifying code fragments, but of entire recursively
self-modifying minds. Even if it is something that could be scaled down
to a Euriskoish applet, LOGI does not speak of attempting it, and I
would advise against it very, very strongly due to FAI issues.
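
On the narrow engineering question of whether a program can modify
itself at all: that part is mechanically trivial, and has been since the
days of self-modifying machine code; any language that treats code as
data (Lisp is the classic case) supports it directly. Below is a minimal
sketch in Python of the trivial mechanics only: a program that rewrites
a constant in its own source file, so the next run starts from the new
value. The names in it (STEP_SIZE and so on) are hypothetical
illustration, nothing to do with LOGI's architecture.

# Trivial self-modification: rewrite a constant in this file's own
# source, so the next invocation starts from the updated value.
# STEP_SIZE and main are hypothetical names for illustration only.
import re

STEP_SIZE = 1  # the value this program tunes in its own source text

def main():
    # A real system would evaluate the current value against some
    # objective before deciding to change it; here we only report it.
    print(f"running with STEP_SIZE = {STEP_SIZE}")

    # Read this program's own source code.
    with open(__file__, "r") as f:
        source = f.read()

    # Rewrite the assignment so the next run sees STEP_SIZE + 1.
    new_source = re.sub(
        r"^STEP_SIZE = \d+",
        f"STEP_SIZE = {STEP_SIZE + 1}",
        source,
        count=1,
        flags=re.MULTILINE,
    )
    with open(__file__, "w") as f:
        f.write(new_source)

if __name__ == "__main__":
    main()

The hard part was never the rewriting. It is making modifications that
are actually improvements, which requires the program to understand its
own design; that is a problem of intelligence, not of file I/O, and it
is the problem LOGI addresses.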
--
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence