[Fwd: AI big wins]

Eliezer S. Yudkowsky (sentience@pobox.com)
Thu, 01 Oct 1998 20:08:17 -0500

Aargh. I see that this also went only to Hanson.

-- 
        sentience@pobox.com         Eliezer S. Yudkowsky
         http://pobox.com/~sentience/AI_design.temp.html
          http://pobox.com/~sentience/sing_analysis.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.

Date: Sun, 27 Sep 1998 16:06:58 -0500
From: "Eliezer S. Yudkowsky" <sentience@pobox.com>
To: Robin Hanson <hanson@econ.Berkeley.EDU>
Subject: Re: AI big wins

Robin Hanson wrote:

>
> >.... You are dealing with fragmentary
> >increments, assuming that the AI's time to complete any one task is the
> >occurrence that happens on our time scale. But I'm thinking in terms of a
> >series of AIs created by human programmers, and that the entire potential of a
> >given model of AI will be achieved in a run over the course of a few hours at
> >the most, or will bog down in a run that would take centuries to complete.
> >... In either case, the programmer (or more likely the Manhattan
> >Project) sighs, sits down, tries to fiddle with O or I and add abilities, and
> >tries running the AI again. ... But what matters is not
> >the level it starts at, but the succession of levels, and when you "zoom out"
> >to that perspective, the key steps are likely to be changes to the fundamental
> >architecture, not optimization.
>
> The same argument seems to apply at this broader level. The programmer has
> a list of ideas for fundamental architecture changes, which vary in how
> likely they are to succeed, how big a win they would be if they worked,
> and how much trouble they are to implement. The programmer naturally tries
> the best ideas first.

Ah, let me clarify: I mean the succession of levels that the AI pushes itself
through, not the succession of levels that the programmer tries. Once again,
you have to look at a case where you're not just dealing with a list of ideas
a single intelligence comes up with - be it AI or human - but a changing
intelligence, and above all a self-altering intelligence. If you don't look
at the self-alteration, you're not likely to discover any positive feedback.
Each time the AI's intelligence jumps, it can come up with a new list. If it
can't come up with a new list, then the intelligence hasn't jumped enough and
the AI has bottlenecked. The prioritized lists probably behave the way you
said they would. It's the interaction between the lists of thoughts and the
thinker that, when you zoom out, has the potential to result in explosive
growth.

-- 
        sentience@pobox.com         Eliezer S. Yudkowsky
         http://pobox.com/~sentience/AI_design.temp.html
          http://pobox.com/~sentience/sing_analysis.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.
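
A toy sketch of the contrast, in Python; the scalar "intelligence", the
payoff numbers, and the idea generator below are all illustrative
assumptions, not anything from the exchange above. A fixed prioritized list
of improvements adds up to a bounded total no matter what order it is tried
in, while a list regenerated after each jump in intelligence compounds for as
long as each new list is non-empty, and bottlenecks the moment it isn't.

    # Toy model: "intelligence" is a scalar, improvement ideas are payoffs.

    def fixed_list_run(intelligence, ideas):
        # One prioritized list; best ideas tried first; the total is bounded
        # by whatever was on the list to begin with.
        for payoff in sorted(ideas, reverse=True):
            intelligence += payoff
        return intelligence

    def self_improving_run(intelligence, idea_generator, rounds):
        # Each jump regenerates the list; growth continues only while the
        # regenerated list is non-empty.
        for _ in range(rounds):
            ideas = idea_generator(intelligence)
            if not ideas:          # no new ideas: the AI has bottlenecked
                break
            intelligence += sum(ideas)
        return intelligence

    # Hypothetical generator: a smarter system sees proportionally larger wins.
    gen = lambda i: [0.10 * i, 0.05 * i] if i >= 1.0 else []

    print(fixed_list_run(1.0, [0.3, 0.2, 0.1]))      # bounded: 1.6
    print(self_improving_run(1.0, gen, rounds=20))   # compounds: ~16.4

The sketch only makes the shape of the two regimes concrete; whether anything
like the second regime is actually reachable is the whole question at issue.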