Eliezer S. Yudkowsky, <sentience@pobox.com>, writes:
> In my mind, I conceptualize "AI" and "transhuman AI" as being two
> entirely different fields. AI is stuck in the cardboard box of
> insisting on generalized processing and simplified models, and on
> implementing only a piece of one problem at a time instead of entire
> cognitive architectures; the field is stuck on systems that a small
> team of researchers can implement in a year.
>
> I like to think that I actually appreciate the humongous complexity of
> the human mind - in terms of "lines" of neurocode, not content, but code
> - and that I've acknowledged the difficulty of the problem of creating a
> complete cognitive architecture. I suffer from absolutely no delusion
> that a transhuman AI will be small. In the end, I think the difference
> is that I've faced up to the problem.
This reminds me of the joke about the guy who says that he and his wife have an agreement to divide family responsibilities. "She makes the little decisions, and I make the big decisions." When asked for examples, he says, "She gets to decide where we'll go out on weekends, what we'll watch on TV, and whose family we'll spend holidays with. I get to decide who should be elected to the city council, what the President ought to be doing about the economy, and whether the Yankees should trade their best player for a bunch of rookies."
Hal