Eliezer S. Yudkowsky [SMTP:firstname.lastname@example.org] wrote:
> Do these guys have any idea how easy it is to fool an AI? Even a
> human-equivalent mind wouldn't have any of the defenses humans evolved
> over the generations, and a neural network is deterministically
> misleadable. While I strongly support the development of AI, I
> categorically oppose the use of any AI in social structures; it's just
> too easy to abuse or outright crack. Opaque AI like neural networks is
> even worse.
My understanding is that what they are after is something conceptually similar to the image-enhancement software the military uses. You feed all the data you can get your hands on into the program, and it points out non-obvious connections and/or makes guesses about which suspects (out of a group of thousands or more) you ought to be investigating. Either way, the AI's output isn't considered evidence - it's just a tool to help investigators sort through mountains of irrelevant data.
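To make the "sort through mountains of data" idea concrete, here is a toy sketch of that kind of triage step - not anyone's actual system, just an illustration. It scores people by how many known suspects they are linked to in some record set (calls, transactions, whatever) and surfaces the highest-scoring non-suspects for a human investigator to look at. The names, the `rank_suspects` function, and the crude link-counting rule are all invented for the example.

```python
from collections import Counter

def rank_suspects(links, known_suspects, top_n=3):
    """Score each person by how many known suspects they are linked to,
    then return the highest-scoring people who are not already suspects."""
    scores = Counter()
    for a, b in links:
        # Count a link only when exactly one end is a known suspect;
        # the other end becomes a candidate worth a human's attention.
        if a in known_suspects and b not in known_suspects:
            scores[b] += 1
        if b in known_suspects and a not in known_suspects:
            scores[a] += 1
    return [person for person, _ in scores.most_common(top_n)]

# Toy data: records of contact between hypothetical people.
links = [
    ("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
    ("dave", "alice"), ("dave", "bob"), ("erin", "frank"),
]
print(rank_suspects(links, {"alice", "bob"}))
```

The point of the sketch is the division of labor: the program only narrows thousands of candidates down to a short list; deciding whether any of them is actually worth investigating stays with the human, which is why the output isn't treated as evidence.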