From: Samantha Atkins (samantha@objectent.com)
Date: Thu Apr 17 2003 - 02:38:48 MDT
Keith Elis wrote:
> One of the refrains among AI Singularitarians is this notion of
> 'recursive self-enhancement' -- positive feedback, an AI hacking itself
> into something smarter, etc. Has anyone described what this process
> would look like at the software level? I don't mean as a theory of
> intelligence, or as an architecture of intelligence, but as a matter of
> engineering a recursive process in software. Though people are still
> working on how to code some rudimentary form of intelligence, that's not
> the whole problem. Has a software architecture been described in enough
> detail to settle that a program can successfully modify itself? How in
> the world does it work? Where are the proof-of-concept applets I can
> download? Someone must have shown that recursive self-modifying programs
> are tractable since we talk about recursive self-modifying,
> *intelligent*, and *friendly* programs so often. I'm sure someone has.
> Yudkowsky has much of the market cornered on this topic, but in staying
> abreast of his web pages over the last 6 years, I don't recall an answer
> to this question. Nor did I find anything at singinst.org. If I missed
> it, my apologies.
>
> Keith
Are you familiar with Genetic Algorithms? You may want to
google the term for one way this can be done. Also google
Adaptive Programming, self-modifying code, and so on. Many
so-called "higher-level" languages aren't very good at writing
code or at modifying their own code. But the concept is easy
enough.
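To make the genetic-algorithm pointer concrete, here is a minimal toy sketch (my own illustration, not from any particular GA library): a population of bit strings is evolved toward a trivial fitness goal ("all ones") by selection, crossover, and mutation. The fitness function, parameters, and selection scheme here are arbitrary choices for demonstration.

```python
import random

# Toy genetic algorithm ("OneMax"): evolve bit strings toward all ones.
# Candidate programs/solutions are modified generation after generation,
# a simple, well-studied form of automated self-modification by search.

GENOME_LEN = 20
POP_SIZE = 30
GENERATIONS = 60
MUTATION_RATE = 0.02

def fitness(genome):
    # Number of 1 bits; the maximum possible score is GENOME_LEN.
    return sum(genome)

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def crossover(a, b):
    # Single-point crossover: splice a prefix of one parent to a
    # suffix of the other.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome):
    # Flip each bit independently with probability MUTATION_RATE.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

def evolve():
    population = [random_genome() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == GENOME_LEN:
            break  # perfect solution found
        parents = population[:POP_SIZE // 2]  # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best))
```

Of course, this only evolves data interpreted against a fixed fitness function; the harder, open question in the original post is evolving or rewriting the evaluating program itself.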
- s
This archive was generated by hypermail 2.1.5 : Thu Apr 17 2003 - 02:41:34 MDT