Re: reasoning under computational limitations

hal@rain.org
Sun, 28 Mar 1999 18:22:44 -0800

Wei Dai, <weidai@eskimo.com>, writes:
> Several people have answered my first question with "yes", and an implicit
> assumption in their reasoning is that before you know what your number is
> you should reason as if the 100!-th digit of PI is uniformly distributed.
> This assumption, along with the Self-Sampling Assumption is needed to
> reach the above conclusion. But of course the 100!-th digit of PI has a
> definite value and is not random. So how do we justify the use of the
> uniform distribution here?

I don't fully understand the issue here. Suppose I flip a coin but don't yet look at how it landed. I can assume a uniform distribution over heads and tails. But by expending some effort, say, leaning over and looking at the floor, I can learn how the coin fell.

Can't computational uncertainty be treated the same way? I don't know the 21st digit of pi offhand, although I have memorized pi beyond that point. At the cost of some effort, I can recite pi in my head and learn the 21st digit.
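
As a concrete illustration of what "expending the effort" means for a computational fact, here is a minimal Python sketch. It assumes the mpmath arbitrary-precision library, and the function name nth_pi_digit is my own, nothing standard. The point is only that the digit is a fixed value that a known procedure will reveal at a known cost:

    from mpmath import mp

    def nth_pi_digit(n):
        # n-th decimal digit of pi after the point (n = 1 returns 1).
        mp.dps = n + 10   # working precision, with a few guard digits
        return int(mp.floor(mp.pi * 10**n)) % 10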

In either case, until I take the action necessary to reduce my uncertainty, the best I can do is make use of the basic information available. I know that the 21st digit of pi is a decimal digit from 0 through 9, and I know that the coin fell either heads or tails. I assume the value is distributed uniformly over those possibilities. When I expend the effort to learn more, I can reduce the uncertainty.
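
Written out in the same toy setting (again my own illustration, reusing the sketch above), the before-and-after beliefs look like this:

    from fractions import Fraction
    from mpmath import mp

    def nth_pi_digit(n):  # as sketched above
        mp.dps = n + 10
        return int(mp.floor(mp.pi * 10**n)) % 10

    # Before expending any effort: uniform over the ten possible digits.
    prior = {d: Fraction(1, 10) for d in range(10)}

    # After the effort, the belief collapses onto the actual value.
    revealed = nth_pi_digit(21)
    posterior = {d: Fraction(1 if d == revealed else 0) for d in range(10)}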

I guess the problem comes into play when the effort needed to reason on the basis of the uncertainty is larger than the effort needed to reduce the uncertainty. With computational uncertainty this is especially worrisome, since the kind of effort involved in both cases is the same, namely calculation and reasoning. With the coin we can more easily draw a distinction between thinking hard about heads and tails and leaning over to peek at the coin.

> And in general when faced with a computational
> problem that we can't solve, (1) does it always make sense to reason as if
> there is a probability distribution on the answer and (2) how do we come
> up with this distribution?

Maybe it makes sense to use a cost model in which the cost of reasoning about uncertainty appears explicitly. You could then assign a specified cost to reducing uncertainty, which would be especially easy to describe for computational uncertainty, where the reduction is itself just a calculation.
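
As a toy version of such a model (my own illustration, not anything proposed in the thread): suppose you can bet on the value of the digit, winning some payout if you guess right, and you may optionally pay the computation's cost up front to learn the answer first. The decision of whether to reduce the uncertainty is then a simple expected-value comparison:

    def should_compute(win_payout, compute_cost):
        # Blind guess under a uniform prior over ten digits wins 1 time in 10.
        ev_guess = win_payout / 10
        # Paying the cost makes the win certain.
        ev_compute = win_payout - compute_cost
        return ev_compute > ev_guess

Here computing is worthwhile exactly when the cost is below nine tenths of the payout; the same comparison applies to the coin, only with a cost term that is harder to state in units of calculation.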

Hal