Eliezer S. Yudkowsky wrote:
>SF, yes. I have, for example, read "Tik-Tok" and "Roderick", which you refer
>to later on. In both cases, the cognitive nature of the Asimov loss is never
>explained, and is usually guessed to be a manufacturing flaw. You don't need
>"Tik-Tok" for that; it's available in "Frankenstein".
Have you actually read "Frankenstein"? The flaw was in the manufacturer, not the monster!
> Greg Egan's
>"Quarantine" is the only really detailed look (that I've read) at a coercion
>which fails for fundamental reasons of cognitive science.
A result well known in psychology: rewards always work better and last longer than punishments.
>attention, except for the AI-oriented stuff. _Ancient_ philosophy, maybe, as
>long as I avoid fools like the arch-traitor Plato.
Know thy Enemy! Or live the fate of the Ostrich.
>I hear about them (3 Laws of Robotics) all the time.
>Not necessarily from AI workers, but Asimov
>has done a very good job of teaching everyone to think that AIs need
>to keep them in order. Now I not only have to worry about the AIers, I have
>to worry about the venture capitalists and the marketers and the managers.
Feed em all Chicken Soup, I say!
(Kosher Chickens Only)
>Incredibly funny books - but they are not cognitive science. Reading them,
>you'd think: "My God, look what happens when robots go unrestrained! We'd
>better make extra sure to slap more coercions on them." That the robots go
>insane _because_ of the coercions, not in spite of them, is never suggested.
Coercions make people insane too.
One remark: Coercion exists. It works. If it didn't, evolution would have eliminated it long ago. In many areas, coercion results in one-trial learning. A rat that eats bad food learns to avoid that food ever after. The rat NEVER eats that food again. This has some advantages in some situations.
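The contrast above can be sketched in a few lines of code. This is a minimal illustrative model (my own, not anything from the psychology literature): reward learning as a gradual incremental update, versus one-trial aversive learning where a single bad outcome permanently drives the value negative. The function names and update rules are assumptions for illustration.

```python
def reward_learning(value, reward, rate=0.1):
    """Incremental update: value creeps toward the reward signal,
    requiring many trials to get there."""
    return value + rate * (reward - value)

def aversive_learning(value, got_sick):
    """One-trial update: a single bad outcome slams the value
    to 'avoid' and it stays there."""
    return -1.0 if got_sick else value

# Reward learning: even after 20 trials, the value has only
# crept most of the way toward the reward of 1.0.
v = 0.0
for _ in range(20):
    v = reward_learning(v, reward=1.0)
print(round(v, 3))  # ≈ 0.878, still short of 1.0

# Aversive learning: one bad meal, and the food is avoided forever.
food_value = 0.5
food_value = aversive_learning(food_value, got_sick=True)
print(food_value)  # -1.0
```

The asymmetry is the point: the reward learner needs repetition, while the aversive learner is done after a single trial — fast, but with no way to unlearn a mistake.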
O--------------------------------O
| Hara Ra <email@example.com>    |
| Box 8334 Santa Cruz, CA 95061  |
|                                |
| Death is for animals;          |
| immortality for gods.          |
| Technology is the means by     |