Re: AI done here cheap (was: Re: Luddites are everywhere!)

From: Aaron Davidson (davidson@cs.ualberta.ca)
Date: Wed Mar 22 2000 - 00:04:07 MST


Damien Broderick wrote:

>Hey, I love Eli like a son (not necessarily the most favorable
>comparison, if one reads his autobiographic sketch, but still...),
>admire the hell out of his smarts and articulateness, but let's not
>forget that so far all he's done is talk a good game.
>
>(Even that might be putting it a little strongly: the only formal AI
>expert on the list, Robin Hanson, has expressed misgivings over
>Eliezer's broad-brush and non-canonical algorithms/heuristics, and
>I haven't seen anyone cluey like Moravec or Vinge rush to adopt his
>schemata, although they both know about his site.)

I'm going to delurk for a moment to give my two cents. I'm an AI
researcher (starting my master's in AI) and I've had a good look at
Eliezer's written work.

Yes, it is a lot of talk, and yes, there are plenty of problems with
it. Much of it is very high level -- it's like trying to make the
whole enchilada when we still don't know how to refry beans or make
tortillas.

*However*, to Eliezer's credit -- of *course* it will have problems
at this stage. He is trying to tackle the "hard problem" of AI, for
bob's sake. There is little out there that rivals his work.
Researchers simply can't publish work like this in the social setting
in which science is practiced these days.

CaTAI really is impressive -- Eliezer is definitely a smart cookie.
I sure could not have put so comprehensive a document together. My
prediction is that as AI continues to progress and Eliezer continues
to learn more, nearly all of his plans will need rewriting and
refinement. There's too much there for it all to turn out to be
correct. I'm sure Eliezer knows this. But he will be there with
newer, more powerful ideas to replace his old ones.

For all its shortcomings, which really cannot be avoided, I have yet
to see a better or more organized plan to accomplish the same goal.
You have to start somewhere. This is why I am a Singularitarian. The
Singularity Institute should be considered for funding, and it should
be staffed with lots of non-conformist AI researchers like Yudkowsky.

From comments on this list, I get the impression that many list
members imagine the "mad scientist" scenario where Eliezer locks
himself in his room for 5 years coding feverishly night after night
until *WHAMMO* his seed AI takes root and blossoms into a
Singularity. Not likely. As Eliezer has stated, this is a massive
undertaking. It will take lots of money, lots of time, lots of
computing power, and lots of brilliant people.

This is just the beginning folks. You ain't seen nuthin' yet.

I wish I could post more but I have to get back to working on AI ;-)

                  --- May You Live In Exciting Times ---

-- 
+-------------------------------------------------------------------------+
| Aaron Davidson  <ajd@ualberta.ca>  http://www.cs.ualberta.ca/~davidson/ |
+-------------------------------------------------------------------------+


