"Eliezer S. Yudkowsky" wrote:
> "Michael S. Lorrey" wrote:
> > You create a blind spot. In the blind spot is the 'conscience' kernel, which
> > cannot be directly manipulated. It can only be programmed by the experiential
> > data input, which it analyzes for useful programming content. It colors this
> > new content with its existing meme set before integrating the new content
> > into its accumulated database of 'do's' and 'don'ts'. The entire database
> > gets to vote on every decision, so new content cannot completely wipe out
> > old content, except under extremely stressful circumstances (i.e. HOT STUFF
> > HURTS!).
> Inadequately specified cognitive elements. What is a "meme set"? What
> are "stressful circumstances"? When you say "it analyzes", what exactly
> is doing the analyzing, and how, and why do you trust it?
meme set: a set of memes.
stressful circumstances: circumstances intense enough to override the database's normal vote (e.g. HOT STUFF HURTS!).
it analyzes: duh, pattern recognition, semantic analysis, etc. A unit which can derive memes from the flow of input data.
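To make the mechanism concrete, here is a minimal sketch (all names hypothetical, not from any actual AI design) of a conscience kernel as described: memes derived from experience are colored by the existing set, and the whole accumulated database votes on each decision, so one new input cannot wipe out old content unless the stress is extreme.

```python
class ConscienceKernel:
    """Hypothetical sketch of the 'conscience kernel' described above."""

    def __init__(self):
        # meme -> accumulated weight; positive = "do", negative = "don't"
        self.memes = {}

    def ingest(self, meme, valence, stress=1.0):
        # New content is colored by the existing meme set: its weight is
        # averaged with any prior weight, so a single input cannot erase
        # accumulated experience -- unless stress is extreme (HOT STUFF HURTS!).
        prior = self.memes.get(meme, 0.0)
        divisor = 2 if meme in self.memes else 1
        self.memes[meme] = (prior + valence * stress) / divisor

    def decide(self, relevant_memes):
        # The entire database gets to vote: sum the weights of every
        # relevant meme; the sign of the total decides "do" vs "don't".
        score = sum(self.memes.get(m, 0.0) for m in relevant_memes)
        return score >= 0


kernel = ConscienceKernel()
kernel.ingest("touch_stove", +0.5)               # mild curiosity says "do"
kernel.ingest("touch_stove", -10.0, stress=5.0)  # HOT STUFF HURTS!
print(kernel.decide(["touch_stove"]))            # prints False ("don't")
```

The averaging step is one simple way to model "colors this new content by its existing meme set"; a real design would need a far richer representation.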
> From http://www.tezcat.com/~eliezer/AI_design.temp.html#pre_prime
> The Rigidity Problem:
> "So?" you say. "Don't let the AIs reprogram the goal system."
> Leaving out the equivalence of the goal system and the reasoning
> system, leaving out the ubiquitous and reasoned connections between
> goal and action, leaving out the web of interdependencies that make
> goal behaviors a function of the entire AI...
I didn't say that. It can reprogram the goal system, just not directly, since the goal system is part and parcel of the compiled program which is the self (i.e. why you can't edit a program which is currently running, unless it is programmed to edit itself). The reprogramming is created by analysis of the input data stream: memes derived from that stream become the new programming.
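The point above can be sketched in a few lines (a toy illustration with hypothetical names, not a real design): the goal system has no direct setter; the only path to changing it runs through the unit that derives memes from the input stream.

```python
class Agent:
    """Toy agent whose goal system can only be reprogrammed indirectly."""

    def __init__(self):
        # The "compiled" goal system of the running self. There is
        # deliberately no public method that writes to it directly.
        self._goals = {"seek_knowledge": 1.0}

    def process_input(self, datum):
        # The only route to reprogramming: a meme derived from the
        # data stream adjusts the goal weights.
        meme, delta = self._derive_meme(datum)
        self._goals[meme] = self._goals.get(meme, 0.0) + delta

    def _derive_meme(self, datum):
        # Placeholder for pattern recognition / semantic analysis:
        # here just a trivial "name:delta" parser.
        name, delta = datum.split(":")
        return name, float(delta)


agent = Agent()
agent.process_input("seek_knowledge:0.5")   # experience reinforces a goal
agent.process_input("avoid_hot_stuff:2.0")  # experience creates a new goal
print(agent._goals)
```

The underscore-prefixed attribute only signals intent in Python, of course; the architectural claim is that the edit path exists solely inside the self-modifying program, not as an external interface.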