Re: Opinions as Evidence: Should Rational Bayesian Agents Commonize Priors

Date: Wed May 09 2001 - 16:00:19 MDT

Robin writes:

: For example, consider two astronomers who disagree about whether the
: universe is open (and infinite) or closed (and finite). Assume that they
: are both aware of the relevant cosmological data, and consider themselves
: Bayesians, and so they want to attribute their differences of opinion
: to differing priors about the size of the universe. Assume that these
: astronomers also both accept that their beliefs are encoded in neuronal
: structures, that their initial beliefs were determined largely by the
: expression of genes inherited from their parents, and that those genes
: were the result of a long evolutionary selection process. Finally, let
: one of them believe that nature was equally likely to have switched the
: assignment of their priors, so that the person with the open-favoring
: prior instead had the closed-favoring prior and vice versa.

Counterfactuals are often trickier than they look. Hofstadter has some
fun with this in GEB, as when you drive through a swarm of bees and think
"good thing they weren't made of cement."

But they get even trickier when you try to imagine who "you" are in a
world with a different history. What if I were that poor orphan I see
in Bombay? Who was I in ancient Egypt?

I'm not sure these questions make sense. I can conceive, in principle,
of estimating the probability of a given objective world: one in which
there is an individual with particular characteristics (perhaps similar
to mine) living as an orphan in Bombay, or in ancient Egypt.

But I don't think it would make sense to say then that that person was
*me*, in the way we usually think of identity.

In the context of this paper, what does it mean to say that I might
have had different priors, or that your priors and mine might have been
reversed? To some extent, my priors *are* me. Or at least, they are
the seed which grows into me. Change these and the resulting person
would not be me. Maybe he'd be you, maybe he'd be a new person entirely.

I don't see how an agent can give a meaningful estimate of the probability
that *he* might have been given different priors. If you swap his
priors with someone else, you don't thereby give him different priors.
Identity goes with the priors, plus the experiences. If you swap the
priors but don't swap the experiences, you've created two new people.
If you swap priors + experiences, you've simply relabeled the existing
world and haven't really done anything meaningful.
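The swap can be made concrete with a toy Bayes computation (the numbers
here are mine, not from the paper): two agents who share the same
cosmological data but start from different priors reach different
posteriors, and "giving Alice Bob's prior" just reproduces Bob's
calculation, not a different Alice.

```python
def posterior(prior_open, likelihood_open, likelihood_closed):
    """P(open | data) by Bayes' rule, for a two-hypothesis world."""
    p = prior_open * likelihood_open
    q = (1 - prior_open) * likelihood_closed
    return p / (p + q)

# Hypothetical shared likelihoods from the cosmological data:
L_open, L_closed = 0.6, 0.4

alice = posterior(0.8, L_open, L_closed)   # open-favoring prior
bob   = posterior(0.2, L_open, L_closed)   # closed-favoring prior

# Swapping the priors doesn't give *Alice* a new belief; it simply
# computes the other person's posterior under the other label.
assert posterior(0.2, L_open, L_closed) == bob
```

Nothing in the arithmetic distinguishes "Alice with the swapped prior"
from "Bob"; the difference, if any, has to come from the philosophical
account of identity, which is the point at issue.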

No doubt other people have different philosophical views about the
nature of identity where swapping priors makes sense. But I think the
paper needs to address this issue head-on rather than talk of swapping
priors as though it were a philosophically unproblematic counterfactual.
(Of course, that's assuming that I didn't miss the point entirely.)


This archive was generated by hypermail 2b30 : Mon May 28 2001 - 10:00:03 MDT