Re: Special difficulties of AI terminology

From: Lee Daniel Crocker
Date: Tue Feb 19 2002 - 12:31:33 MST

> (Eliezer S. Yudkowsky <>):
> Somewhere in an alternate universe, there is a version of Eliezer Yudkowsky
> who publishes "leisurely" scientific papers, and one of those papers is
> entitled "Special difficulties consequent to research on generally intelligent
> supersystems." I'll probably never get to write that paper. But I'd like to
> publish a small excerpt from that alternate universe.
> Right now I'm trying to use standard words, without neologism, and yet this
> involves a new problem: I'm rapidly running out of words. If you discuss
> generally intelligent supersystems, then not only will you need to overload an
> immense number of words, but furthermore they will be very abstract words,
> ones that would otherwise play a strong supporting role in abstract language.
> In my paper, I can no longer say "the past concept that children have no
> mathematical abilities"; I have to say "the past notion that children have no
> mathematical abilities". "Concept" has already been overloaded as a technical
> term. I have to use "notion" or "idea".
> Recently I wanted to discuss the second part of a trigger - you know, the
> thing that happens after the condition part is satisfied - and I found that I
> couldn't. "Action" was long since overloaded to describe a deliberate action
> of the AI. "Sequitur", which would have been appropriate in this case, had
> been used somewhere more important. I'd used up the space of causal
> terminology and the only words left didn't really describe what I wanted to
> say - "consequence" and "effect" and "outcome", for example, treat the action
> (sigh) as static, rather than as an active force. And at that I was lucky the
> paper didn't go into causality in more detail, or I would probably have been
> forced to use up "consequence", "effect", and "outcome" as well. At one
> point, faced with the prospect of being forced to overload the term "problem"
> to describe a virtual-environment challenge presented by the programmer to
> teach a concept or belief, I took the coward's way out. I called it a
> "microenvironmental challenge", abbreviating it as "MEC".
> So you can avoid neologism, but it's going to come at a cost; describing
> thought processes abstractly often consumes a term that we use in ordinary
> abstract thinking. Maybe when I'm done with the first draft, I'll be able to
> go back and reparse the language space more precisely and get rid of some of
> the current problems. For example, on reflection (and some
> thesaurus-hunting), I think that "microtask" may do to replace the neologistic
> "MEC". But don't be too quick to blame the creators of neologisms until
> you've walked a few miles in their thesauruses.
> -- -- -- -- --
> Eliezer S. Yudkowsky
> Research Fellow, Singularity Institute for Artificial Intelligence

Why avoid neologism at all? When treating specific concepts within a
field, and especially within a specific paper, neologizing is much
preferred to overloading. Indeed, I might argue that creating new words
for new concepts is part and parcel of the creative process itself. The
only thing that's bad is creating new words for old concepts that people
already understand--crackpots do that to hide the fact that they aren't
actually creating new ideas but just dressing up old ones. If your ideas
are truly new and different (or even just more specific in an important
way), then by all means make up new terms. I wouldn't be able to teach
you about poker strategy without using a bunch of terms like "outs" and
"counterfeits" and "free cards" and "drawing dead". Why should I expect
you to be able to explain AI to me without first defining some new terms?

Lee Daniel Crocker <> <>
"All inventions or works of authorship original to me, herein and past,
are placed irrevocably in the public domain, and may be used or modified
for any purpose, without permission, attribution, or notification."--LDC

This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 13:37:40 MST