> For those following the IA vs. AI thread, consider the
> backlash now escalating against genetic engineering
> (bioenhanced food, cloning). And this hysterical backlash
> is happening despite the most ethical intentions and
> carefully crafted rhetoric of working biotechnologists.
I hate relying on human stupidity, but in this case I think we're fairly safe. Unless AI starts depriving people of jobs, or becomes available at the local supermarket, there isn't going to be a GM-type scare. Remember, there haven't been any instances of AIs going Frankenstein, and there probably never will be. That does make it harder to advance what-if scenarios of the type used by anti-GM advocates. You can burn a crop to the ground, you can rail against devil foods, but it's going to be hard to get your average villager angry at a computer program. It's actually a lot harder to get people excited over the end of the world than it is to get them excited about an evil hamburger.
The people who'd be most opposed to us will dismiss the entire thing as a figment of the imagination. In fact, I'd suggest deliberately planting memes among New Agers to the effect that transhumanists are stupid, misguided, comical, and harmless. When some Congresscritter gets up to make a speech about evil AIs taking over the world, we want half the floor laughing and the other half saying that he's buying into the whole evil transhumanist worldview by suggesting the possibility.
Or at least, I *would* suggest that, if it weren't a lie. From an ethical standpoint, I would feel better if people took an interest in their own destiny, even if it was the wrong interest. It's your world too, humanity! If you believe AI is wrong, then stand up and fight! I'm tired of being the only one who cares!
In the end, humanity's strength of will and mind may be more important than whose side anyone is on. If there are going to be anti-AI arguments, then let's do everything we can to supply the arguers with the factual information they need to develop those arguments; raise the level of debate so that facts win out. I'm not as idealistic - in the social sense - as I used to be, but I still like to think of myself as serving intelligence, and I would rather not deliberately encourage stupidity.
> Now imagine the public reaction when they discover the
> *real* agenda of most AI researchers who joyfully look
> forward to the day their creations make us all extinct!
When the public "discovers" our "real" agenda? I don't recall pretending for one second that my agenda was anything other than the reality, here or anywhere else. When I thought AIs would be benevolent, I said so. When I realized I didn't have the vaguest notion, I said so. When I realized it wasn't my job to care, I said so. From Moravec to Yudkowsky, we've been square with you. We have nothing to hide. And I think the public will appreciate that. Once you've admitted you're out to destroy Life As We Know It, anything your opponent accuses you of is going to be an anticlimax.
--
firstname.lastname@example.org        Eliezer S. Yudkowsky
http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way