From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat May 31 2003 - 05:23:57 MDT
Robin Hanson wrote:
> Rafal Smigrodzki responded to Eliezer S. Yudkowsky re my paper:
>
>>>> Certainly quite a complex article. I think that what you quoted
>>>> above means that the Bayesian would treat the output of another
>>>> Bayesian as data of the same validity as the output of his own
>>>> reasoning. ... In effect, his beliefs are as valid an input for
>>>> your future reasoning as your own sensory ... subsystem outputs.
>>>
>>> Bear in mind that one should distinguish between *real*, *genuine*
>>> Bayesians like AIXI, and mere Bayesian wannabes like every
>>> physically realized being in our Universe.
>>>
>>> Bear in mind also that the above result holds only if you believe
>>> with absolute certainty (itself a very non-Bayesian thing) that the
>>> Bayesian's reasoning processes are perfect.
>>
>> ### But why? If I believe with some reasonable certainty that the
>> other Bayesian is as perfect as I am, and then some (to account
>> for my lack of absolute certainty that he is what I think he is),
>> then I should still assign the same level of trustworthiness to
>> his beliefs as to my own.
>
> Let me echo Rafal; you should find their reasoning as useful as your
> own as long as they are as reliable as you. They need not be perfect.
Yes, correct; sorry. I was thinking of a perfect Bayesian trying to use
the results of another Bayesian (who must therefore also be perfect).
I do think you require certainty of honesty, though (or am I mistaken?).
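A toy model makes both points concrete. The setup below is a minimal
sketch of my own, not anything from Robin's paper: a binary hypothesis H
with prior p, where each agent receives an independent signal that
matches the truth with probability r (its "reliability"), and a simple
assumed lying model in which a dishonest report inverts the signal.
Updating on the other agent's honestly reported signal is then the
identical computation to updating on your own signal, while uncertainty
about honesty merely degrades the report's effective reliability:

  # Toy model (illustrative assumptions, not from Robin's paper):
  # a binary hypothesis H with prior p; each agent gets an independent
  # signal that matches the truth with probability r ("reliability").

  def posterior(prior, signal_says_h, reliability):
      """One Bayes update on a signal of the given reliability."""
      like_h = reliability if signal_says_h else 1.0 - reliability
      like_not_h = (1.0 - reliability) if signal_says_h else reliability
      joint_h = like_h * prior
      return joint_h / (joint_h + like_not_h * (1.0 - prior))

  p = 0.5   # prior P(H)
  r = 0.8   # reliability of my own sensory channel

  # Updating on my own positive signal:
  print(posterior(p, True, r))       # 0.8

  # Updating on another agent's honestly reported positive signal is
  # the identical computation, provided he is as reliable as I am --
  # he need not be perfect (r = 1.0), only equal:
  print(posterior(p, True, r))       # 0.8 again

  # But honesty matters. If he reports his signal truthfully only with
  # probability q (and inverts it otherwise -- the assumed lying model),
  # the report's effective reliability shrinks toward chance:
  q = 0.9
  r_eff = q * r + (1.0 - q) * (1.0 - r)   # = 0.74
  print(posterior(p, True, r_eff))        # ~0.74, a weaker update

As q falls to 0.5 the report carries no information at all, which is one
way to read the honesty requirement.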
--
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence