Eliezer S. Yudkowsky wrote,
> That said, their AI theory most certainly appears to suck, and I would put
> their chance of passing the Turing test in ten years at zero, unless the
> organization shifts to a completely new theory. My guess is that the
> incoherent babblings they've achieved are every bit as dumb as they
> appear, and not simple cases of something that could scale up to greater
> complexity.
I was disturbed by their examples as well. They claimed the outputs were
nonsensical in the way a child's ramblings are, but that did not appear to
me to be the case. Children do not randomly produce non sequiturs. They
play games, have imaginary friends, and change topics frequently, but they
do not accidentally pull up the wrong information from their brains and
spew it out in answer to the wrong question. I also didn't understand the
concept of the computer wanting bananas on a trip. Unless this is an
android that simulates eating and has taste sensors, the statement is
meaningless. How did the computer "learn" to "like" bananas? If it can't
eat, how does it "like" them? This does not compare to a child who likes
bananas. I found the examples to be counter-examples of AI rather than
demonstrations of it. (Unfortunately!)
-- Harvey Newstrom <http://HarveyNewstrom.com> <http://Newstaff.com>