From: Keith Elis (hagbard@ix.netcom.com)
Date: Wed Apr 16 2003 - 15:26:09 MDT
One of the refrains among AI Singularitarians is this notion of
'recursive self-enhancement' -- positive feedback, an AI hacking itself
into something smarter, and so on. Has anyone described what this
process would look like at the software level? I don't mean as a theory
of intelligence, or as an architecture of intelligence, but as a matter
of engineering a recursive process in software. People are still working
out how to code even a rudimentary form of intelligence, but that's not
the whole problem.

Has a software architecture been described in enough detail to establish
that a program can successfully modify itself? How in the world does it
work? Where are the proof-of-concept applets I can download? Someone
must have shown that recursive self-modifying programs are tractable,
since we talk about recursive, self-modifying, *intelligent*, and
*friendly* programs so often. I'm sure someone has.

Yudkowsky has much of the market cornered on this topic, but in
following his web pages over the last six years I don't recall an answer
to this question. Nor did I find anything at singinst.org. If I missed
it, my apologies.
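To be clear, the bare plumbing of self-modification is not the hard
part, and it is easy to demonstrate. Here is a minimal Python sketch,
purely illustrative (the names SOURCE, step, and recompile are my own
invention, not from any published architecture), of a program that
holds one of its own functions as source text, rewrites that text,
recompiles it, and rebinds the name:

```python
# A toy sketch of runtime self-modification, using only built-ins.
# The program keeps the source of one of its own functions as data,
# rewrites that source, recompiles it, and swaps in the new version.
# All names here (SOURCE, step, recompile) are purely illustrative.

SOURCE = "def step(x):\n    return x + 1\n"

def recompile(src):
    """Compile function source text and return the function object."""
    namespace = {}
    exec(compile(src, "<self>", "exec"), namespace)
    return namespace["step"]

step = recompile(SOURCE)
print(step(1))  # 2

# The "self-enhancement" move: the program edits its own source text,
# then replaces the running function with the recompiled version.
SOURCE = SOURCE.replace("x + 1", "x + 2")
step = recompile(SOURCE)
print(step(1))  # 3
```

The mechanics are mundane; the open question is the one above -- what
loop could decide *which* rewrite makes the program smarter, and do so
reliably enough to call the result friendly.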
Keith
This archive was generated by hypermail 2.1.5 : Wed Apr 16 2003 - 15:33:05 MDT