Hara Ra wrote:
> I suspect that successful SIs who are still interfacing with humans will
> convey attributes and messages which elicit human trust, just like petting
> the dog conveys to the dog that you are friendly and trustworthy. And,
> just as we don't discuss the Extropian Principles with our pets, I doubt
> that the SIs will bother us with things they know we can't
I fail to see how the demeanor of SIs is relevant to the issue identified in the original post. The issue is whether we can determine that Strong SIs will ever exist. There are two scenarios: either Strong SIs will exist but we won't be able to recognize them as such due to our own limitations, or Strong SIs will not exist because all SIs will only be Weak SIs. If we cannot grasp the intellectual artifacts of an SI, it might simply be extremely advanced in its intellectual accomplishments. Maybe it is 10,000 years ahead of human intelligence. But that does not make it a Strong SI; it's a Weak SI. We won't know the difference until we can begin to grasp the concepts the SI is thinking about, unless of course we could somehow determine the existence of knowledge that is forever beyond our grasp (highly improbable).
I am uncomfortable with the idea of Strong SI. This anxiety is not due to anthropic concerns; Weak SIs will succeed quite nicely at humbling human intellects. I do not see how we can speculate on the existence of knowledge that transcends the definition of knowledge. I've seen no rational underpinnings described that substantiate the speculation that such superknowledge might exist or be developed.