From: Eliezer S. Yudkowsky (firstname.lastname@example.org)
Date: Mon Feb 18 2002 - 08:40:37 MST
Harvey Newstrom wrote:
> This is one problem I have had with your published material. When I have
> asked AI experts at IBM to look at your stuff, they have no idea what you
> are talking about. You have invented your own concepts and terminology from
> scratch and are not in-step with the rest of the AI community.
Harvey, I'm not sure the rest of the AI community *has* the terminology. I'm
currently working on a piece with references intended for more usual
publication. Hopefully that will be an easier read for the experts once a
draft is available. And I do use more standard terminology now than I did in
the old days, and define it better, or so I hope.
But some of the terminology just doesn't exist. And some, e.g. "LISP
symbols", is inherently awful and needs to be replaced, e.g. by "LISP
tokens". I'll send you a URL for the draft the moment it's available, but a
lot of AI is just *wrong* and this is something to be aware of.
Furthermore, the terminology I do use (especially recently) is drawn more from
the "brain science" end of cognitive science than the "computer science" end.
So even where I use standard terminology, such as "complex functional
adaptation", and cite a source, such as Tooby and Cosmides 1992, the average
AI guy may not recognize it (although the average brain science guy certainly
would).
> While this is not necessarily a bad thing, it does make it difficult to evaluate
> your work. Most of the experts who look at your website and published stuff
> just shrug their shoulders and walk away. I have not gotten any meaningful
> evaluation from anyone. If experts can't figure out what you are doing, I
> don't see how the lay-transhumanists can figure it out. I think that
> documenting and sharing your ideas is probably your biggest problem with
> getting funding or interest from other groups.
True. And I am working on this and will hopefully have at least a draft ready
shortly. But I am also obliged to consider questions of time, as you
mention. Certainly I don't think the field of AI will ever be convinced by
something short of a demonstration, and we have determined that SIAI's current
priority is to place the ideas in a form where a solid project could be
initiated immediately on funding being available - i.e., more internal design
documents, not more publications, not more evangelism. What I'm working on
currently is supposed to be a *brief* departure from those priorities. We'll
have to see whether it helps any.
In truth, I suspect that the new paper, howsoever conforming to the standards
of academia, will still be harder to read than plain old informal GISAI,
especially since my goal for this paper is a more complete explanation tied
into existing theory, rather than a simpler and more approachable explanation.
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 13:37:39 MST