Re: AI done here cheap (was: Re: Luddites are everywhere!)

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Mar 14 2000 - 21:01:45 MST


Damien Broderick wrote:
>
> At 06:22 PM 14/03/00 -0800, Robert B. wrote in passing:
>
> >way conservative on when we get human-equivalent computing. The hard
> >part will be whether good AI takes 5 years or 20, but that's in Eliezer's,
> >Doug's and a few others' hands.
>
> Hey, I love Eli like a son (not necessarily the most favorable comparison,
> if one reads his autobiographical sketch, but still...), admire the hell out
> of his smarts and articulateness, but let's not forget that so far all he's
> done is talk a good game.

Well, I'd like to point out, for the record, that so far I haven't
*claimed* to have done anything more. If anyone gets confused over
this, it's
not my fault. Besides, you can always read "Eliezer" as indicating
"someone filling the 'Eliezer' slot in the script", which is how I've
always read such statements.

> (Even that might be putting it a little strongly: the only formal AI expert
> on the list, Robin Hanson, has expressed misgivings over Eliezer's
> broad-brush and non-canonical algorithms/heuristics, and I haven't seen
> anyone cluey like Moravec or Vinge rush to adopt his schemata, although
> they both know about his site.)

Moravec knows about my site? Interesting. *I* didn't know that. Did
Moravec make any comments, or by "knows" do you mean "said he's heard
about it but not actually read anything", like Vinge? If the latter, it
hardly counts as input to the Bayesian probability theorem, if you take
my meaning.
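
(To sketch the Bayes being invoked here, with made-up quantities: write
H for "the ideas on the site are sound" and E for "Moravec has heard
the site exists". Then

    p(H|E) = p(E|H) * p(H) / p(E)

and if merely hearing of a site is about equally likely whether or not
its contents are any good - p(E|H) ~= p(E|~H) - then the likelihood
ratio is near 1 and the posterior p(H|E) lands right back at the prior
p(H). Evidence that can't discriminate between the hypotheses isn't
input.)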

> Sure, it's cheery to mention each other in such glowing terms, but it might
> be preferable to see some code that does something in the world, or to make
> the world jump up and down waving dollar bills, before we assume that one
> of our number is the new Knuth.

I understand the instinct to look for cheaters in a social situation,
but it seems to me that there's not much I can do about that. If I
wrote an AI tomorrow and it proved the Goldbach Conjecture on Thursday,
someone would still say I was a poseur. Unless writing an AI is
actually going to advance the state of the art - which, unaided, would
probably take a huge chunk of full-time work, and I can't do that
unless I get paid - it's just an ego trip. _Coding a Transhuman AI_
isn't working code, no, but the people *I'm* interested in can
generally perceive that it's different enough to be worth attempting,
and that perception is the prerequisite for getting far enough to make
the attempt.

How much progress do you expect, anyway? I said in _Coding_ that it'd
be a huge job. I said we'd have to dump most of AI and start over. It
took Drexler 11 years to go from the first paper to _Nanosystems_; it's
been only 1.5 years since I set the clock back. If I am everything that
I say I am, even everything that others have thought me to be, you still
wouldn't *expect* me to get any farther than I've gotten with no
funding, no support, and having to spend most of my time pushing the
idea far enough to set up a support organization. By the Bayesian
probability theorem, therefore, estimates of what I can do will have to
depend on your own ability to read _Coding_ et al. and judge for
yourself the mind that wrote it. If Robert Bradbury thinks I,
specifically, am likely to play a part in the Singularity, and he wasn't
just using "Eliezer" in the sense of "some bright young unexpected
fellow with a plan", then that's his lookout.

Yes, I'm annoyed. It's all very well and good to remind the audience
that I haven't produced the code, but I think the audience also deserves
to be reminded that Drexler Himself couldn't produce the code in my
situation. Drexler talked a good game back in 1981, and he *still*
hasn't come up with any hardware, and you know what? That talking
needed doing, or Foresight and _Great Mambo Chicken_ and the
Singularitarian mailing list wouldn't exist today. Nearly twenty years
later, Drexler's good game is
finally starting to look real, and it wouldn't have happened without
Drexler being willing to go out on a limb and get shot at by people who
thought that he was just talking a good game. Talking a good game -
being willing to get out there with no backup and no portfolio and talk
about changing the world - is part of the process of getting into a
position to produce the goods and change the world, and I will not
apologize for any of it.

-- 
       sentience@pobox.com      Eliezer S. Yudkowsky
          http://pobox.com/~sentience/beyond.html
                 Member, Extropy Institute
           Senior Associate, Foresight Institute


