>The goal of AIs is to create something substantially smarter than a
>human in all domains, from science to philosophy, so that there's no
>question of who judges the AI to be intelligent; the AI is a better
>judge than we are.
>The purpose of AI is to create something substantially smarter than
>human, bringing about the next step into the future - the first truly
>substantial step since the rise of the Cro-Magnons - and ending human
>history before it gets ugly (nanotechnological warfare, et cetera). We
>don't really know what comes after that, because the AIs are smarter
>than we are; if we knew what they'd do, we'd be that smart ourselves.
>But it's probably better than sticking with human minds until we manage
>to blow ourselves up.
I must admit that this puzzles me. If we create such a thing and always assume it is the best judge in all situations, how do we ever know when it is mistaken? What happens if the AI decides, in its expansive wisdom (or perhaps through one of its inevitable flaws), that the human race should not exist, and moves to pull the plug? Would you fight it? Or decide that since the AI is smarter than you, it must be right, and willingly lay down your life for the "greater good"?