>A moralist who designs an AI and gives the investigation
>of this problem priority over everything else will send the machine into an
>infinite loop.
>To make matters worse, you may not even be able to prove that it's futile;
>the proposition may be true or false but unprovable, so I don't think it
>would be wise to allow an AI to keep working on any problem until an answer
>is found.
In order for this to be possible, the problem would have to be stated in terms the machine can formalize. An arbitrary subjective animal concept will not be communicable to an AI unless you program it to accept facts without premises... and then what have you achieved?
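The hazard in the quoted passage can be made concrete with a toy search. This is my own illustration (a Goldbach-conjecture counterexample hunt, not anything from the original post): a machine told to keep working until an answer is found may simply never halt, and there may be no proof available in advance that the search is futile.

```python
def is_prime(n):
    """Trial-division primality test -- slow but correct."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_counterexample(limit=None):
    """Search for an even n > 2 that is NOT a sum of two primes.

    With limit=None this is the 'keep working until an answer is found'
    policy from the quote: if the conjecture is true, the loop never
    halts, and we may be unable to prove that in advance.
    """
    n = 4
    while limit is None or n <= limit:
        if not any(is_prime(p) and is_prime(n - p)
                   for p in range(2, n // 2 + 1)):
            return n  # answer found: the conjecture is false
        n += 2
    return None  # bound reached with no answer -- we learned nothing

# A bounded run terminates, but only because we gave up on finding an answer:
print(goldbach_counterexample(limit=1000))  # prints None
```

The only safe design here is the explicit resource bound, which is exactly an admission that the machine cannot be allowed to pursue the answer unconditionally.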