Friday, 6 December 2019

Deterministic updating and the symmetry argument for Conditionalization

According to the Bayesian, when I learn a proposition to which I assign a positive credence, I should update my credences so that my new unconditional credence in a proposition is my old conditional credence in that proposition conditional on the proposition I learned. Thus, if $c$ is my credence function before I learn $E$, and $c'$ is my credence function afterwards, and $c(E) > 0$, then it ought to be the case that $$c'(-) = c(-|E) := \frac{c(-\ \&\ E)}{c(E)}$$ There are many arguments for this Bayesian norm of updating. Some pay attention to the pragmatic costs of updating any other way (Brown 1976; Lewis 1999); some pay attention to the epistemic costs, which are spelled out in terms of the accuracy of the credences that result from the updating plans (Greaves & Wallace 2006; Briggs & Pettigrew 2018); others show that updating as the Bayesian requires, and only updating in that way, preserves as much as possible about the prior credences while still respecting the new evidence (Diaconis & Zabell 1982; Dietrich, List, and Bradley 2016). And then there are the symmetry arguments that are our focus here (Hughes & van Fraassen 1985; van Fraassen 1987; Grove & Halpern 1998).
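To make the norm concrete, here is a minimal sketch in Python, with hypothetical worlds and numbers, of the update it mandates. A credence function is represented as an assignment of probabilities to finitely many worlds, and a proposition as a set of worlds.

def conditionalize(c, E):
    """Return c(. | E): zero out the worlds outside E and renormalise."""
    c_E = sum(p for w, p in c.items() if w in E)
    assert c_E > 0, "conditionalization is undefined when c(E) = 0"
    return {w: (p / c_E if w in E else 0.0) for w, p in c.items()}

# Hypothetical prior over four worlds, and the proposition E = {w1, w2}:
c = {'w1': 0.4, 'w2': 0.3, 'w3': 0.2, 'w4': 0.1}
print(conditionalize(c, {'w1', 'w2'}))
# {'w1': 0.571..., 'w2': 0.428..., 'w3': 0.0, 'w4': 0.0}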

In a recent paper, I argued that the pragmatic and epistemic arguments for Bayesian updating are based on an unwarranted assumption, which I called Deterministic Updating. An updating plan says how you'll update in response to a specific piece of evidence. Such a plan is deterministic if there's a single credence function that it says you'll adopt in response to that evidence, rather than a range of different credence functions that you might adopt in response. Deterministic Updating says that your updating plan for a particular piece of evidence should be deterministic. That is, if $E$ is a proposition you might learn, your plan for responding to receiving $E$ as evidence should take the form:
  • If I learn $E$, I'll adopt $c'$ 
rather than the form:
  • If I learn $E$, I might adopt $c'$, I might adopt $c^+$, and I might adopt $c^*$.
Here, I want to show that the symmetry arguments make the same assumption.
Let's start by laying out the symmetry argument. Suppose $W$ is a set of possible worlds, and $F$ is an algebra over $W$. Then an updating plan on $M = (W, F)$ is a function $U^M$ that takes a credence function $P$ defined on $F$ and a proposition $E$ in $F$ and returns the set of credence functions that the updating plan endorses as responses to learning $E$ for those with credence function $P$. Then we impose three conditions on a family of updating plans $U$.

Deterministic Updating This says that an updating plan should endorse at most one credence function as a response to learning a given piece of evidence. That is, for any $M = (W, F)$, any $P$ on $F$, and any $E$ in $F$, $U^M$ endorses at most one credence function as a response to learning $E$: $|U^M(P, E)| \leq 1$.

Certainty This says that any credence function that an updating plan endorses as a response to learning $E$ must be certain of $E$. That is, for any $M = (W, F)$, $P$ on $F$ and $E$ in $F$, if $P'$ is in $U^M(P, E)$, then $P'(E) = 1$.

Symmetry This condition requires a bit more work to spell out. Very roughly, it says that the way that an updating plan would have you update should not be sensitive to the way the possibilities are represented. More precisely: Let $M = (W, F)$ and $M' = (W', F')$. Suppose $f : W \rightarrow W'$ is a surjective function. That is, for each $w'$ in $W'$, there is $w$ in $W$ such that $f(w) = w'$. And suppose for each $X$ in $F'$, $f^{-1}(X) = \{w \in W | f(w) \in X\}$ is in $F$. Then the worlds in $W'$ are coarse-grained versions of the worlds in $W$, and the propositions in $F'$ are coarse-grained versions of those in $F$. Now, given a credence function $P$ on $F$, let $f(P)$ be the credence function over $F'$ such that $f(P)(X) = P(f^{-1}(X))$. Then the credence functions that result from updating $f(P)$ by $E'$ in $F'$ using $U^{M'}$ are the image under $f$ of the credence functions that result from updating $P$ on $f^{-1}(E')$ using $U^M$. That is, $U^{M'}(f(P), E') = f(U^M(P, f^{-1}(E')))$.
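To fix ideas, here is a small Python sketch, with a hypothetical coarse-graining, of the two constructions that Symmetry uses: the preimage $f^{-1}(X)$ and the coarse-grained credence function $f(P)$.

def preimage(f, X):
    """f^{-1}(X): the fine-grained worlds that f maps into X."""
    return {w for w in f if f[w] in X}

def pushforward(f, P):
    """f(P): coarse-grained credences, so that f(P)(X) = P(f^{-1}(X))."""
    fP = {}
    for w, p in P.items():
        fP[f[w]] = fP.get(f[w], 0.0) + p
    return fP

# Hypothetical example: w1 and w2 are both coarse-grained to v1.
f = {'w1': 'v1', 'w2': 'v1', 'w3': 'v2'}
P = {'w1': 0.25, 'w2': 0.25, 'w3': 0.5}
print(preimage(f, {'v1'}))   # {'w1', 'w2'}
print(pushforward(f, P))     # {'v1': 0.5, 'v2': 0.5}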

Now, van Fraassen proves the following theorem, though he doesn't phrase it like this because he assumes Deterministic Updating in his definition of an updating rule:

Theorem (van Fraassen) If $U$ satisfies Deterministic Updating, Certainty, and Symmetry, then $U$ is the conditionalization updating plan. That is, if $M = (W, F)$, $P$ is defined on $F$ and $E$ is in $F$ with $P(E) > 0$, then $U^M(P, E)$ contains only one credence function $P'$ and $P'(-) = P(-|E)$.

The problem is that, while Certainty is entirely uncontroversial and Symmetry is very plausible, there is no particularly good reason to assume Deterministic Updating. But the argument cannot go through without it. To see this, consider the following updating rule:
  • If $0 < P(E) < 1$, then $V^M(P, E) = \{v_w | w \in W\ \&\ w \in E\}$, where $v_w$ is the credence function on $F$ such that $v_w(X) = 1$ if $w$ is in $X$, and $v_w(X) = 0$ if $w$ is not in $X$ ($v_w$ is sometimes called the valuation function for $w$, or the omniscient credence function at $w$).
  • If $P(E) = 1$, then $V^M(P, E) = \{P\}$.
That is, if $P$ is not already certain of $E$, then $V^M$ takes any credence function on $F$ and any proposition in $F$ and returns the set of valuation functions for the worlds in $W$ at which that proposition is true. Otherwise, it keeps $P$ unchanged.

It is easy to see that $V$ satisfies Certainty, since $v_w(E) = 1$ for each $w$ in $E$. To see that $V$ satisfies Symmetry, the crucial fact is that $f(v_w) = v_{f(w)}$. First, take a credence function in $V^{M'}(f(P), E')$: that is, $v_{w'}$ for some $w'$ in $E'$. Since $f$ is surjective, there is $w$ in $W$ with $f(w) = w'$; this $w$ is in $f^{-1}(E')$, so $v_w$ is in $V^M(P, f^{-1}(E'))$. And $f(v_w) = v_{f(w)} = v_{w'}$, so $v_{w'}$ is in $f(V^M(P, f^{-1}(E')))$. Next, take a credence function in $f(V^M(P, f^{-1}(E')))$: that is, $f(v_w)$ for some $w$ in $f^{-1}(E')$. Then $f(v_w) = v_{f(w)}$ and $f(w)$ is in $E'$, so $f(v_w)$ is in $V^{M'}(f(P), E')$, as required.

So $V$ satisfies Certainty and Symmetry, but it is not the Bayesian updating rule.
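The verification can also be spot-checked numerically. Here is a sketch in Python, with a hypothetical coarse-graining; it checks Certainty and Symmetry for $V$ on one example, which is of course a sanity check rather than a proof.

def v(w, W):
    """The valuation (omniscient) credence function for world w."""
    return {x: (1.0 if x == w else 0.0) for x in W}

def V(P, E, W):
    """The rule V: all valuations for worlds in E, unless P(E) = 1 already."""
    if sum(P[w] for w in E) == 1.0:
        return [P]
    return [v(w, W) for w in E]

def pushforward(f, P):
    fP = {}
    for w, p in P.items():
        fP[f[w]] = fP.get(f[w], 0.0) + p
    return fP

W, Wc = ['w1', 'w2', 'w3'], ['v1', 'v2']
f = {'w1': 'v1', 'w2': 'v1', 'w3': 'v2'}
P = {'w1': 0.25, 'w2': 0.25, 'w3': 0.5}
E_coarse = {'v1'}
E_fine = {w for w in W if f[w] in E_coarse}   # f^{-1}(E')

# Certainty: every endorsed posterior is certain of the evidence.
assert all(sum(Q[w] for w in E_fine) == 1.0 for Q in V(P, E_fine, W))

# Symmetry: V(f(P), E') = f(V(P, f^{-1}(E'))), compared as sets.
lhs = V(pushforward(f, P), E_coarse, Wc)
rhs = [pushforward(f, Q) for Q in V(P, E_fine, W)]
assert all(Q in rhs for Q in lhs) and all(Q in lhs for Q in rhs)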

Now, perhaps there is some further desirable condition that $V$ fails to meet? Perhaps. And it's difficult to prove a negative existential claim. But one thing we can do is to note that $V$ satisfies all the conditions on updating plans on sets of probabilities that Grove & Halpern explore as they try to extend van Fraassen's argument from the case of precise credences to the case of imprecise credences. All, that is, except Deterministic Updating, which they also impose. Here they are:

Order Invariance This says that updating first on $E$ and then on $E \cap E'$ should result in the same posteriors as updating first on $E'$ and then on $E \cap E'$. This holds because, either way, you end up with $$U^M(P, E \cap E') = \{v_w | w \in W\ \&\ w \in E \cap E'\}.$$

Stationarity This says that updating on $E$ should have no effect if you are already certain of $E$. That is, if $P(E) = 1$, then $U^M(P, E) = \{P\}$. The second clause of our definition of $V$ ensures this.

Non-Triviality This says that there's some prior that is less than certain of the evidence such that updating it on the evidence leads to some posteriors that the updating plan endorses. That is, for some $M = (W, F)$, some $P$ on $F$, and some $E$ in $F$, $U^M(P, E) \neq \emptyset$. Indeed, $V$ will satisfy this for any $P$ and any $E \neq \emptyset$.

So, in sum, it seems that van Fraassen's symmetry argument for Bayesian updating shares the same flaw as the pragmatic and epistemic arguments: it relies on Deterministic Updating, and that assumption is unwarranted.

References

  1. Briggs, R. A., & Pettigrew, R. (2018). An accuracy-dominance argument for conditionalization. Noûs.  https://doi.org/10.1111/nous.12258
  2. Brown, P. M. (1976). Conditionalization and expected utility. Philosophy of Science, 43(3), 415–419.
  3. Diaconis, P., & Zabell, S. L. (1982). Updating subjective probability. Journal of the American Statistical Association, 77(380), 822–830.
  4. Dietrich, F., List, C., & Bradley, R. (2016). Belief revision generalized: A joint characterization of Bayes’s and Jeffrey’s rules. Journal of Economic Theory, 162, 352–371.
  5. Greaves, H., & Wallace, D. (2006). Justifying conditionalization: Conditionalization maximizes expected epistemic utility. Mind, 115(459), 607–632.
  6. Grove, A. J., & Halpern, J. Y. (1998). Updating sets of probabilities. In Proceedings of the 14th conference on uncertainty in AI (pp. 173–182). San Francisco, CA: Morgan Kaufman.
  7. Lewis, D. (1999). Why conditionalize? Papers in metaphysics and epistemology (pp. 403–407). Cambridge: Cambridge University Press.





Thursday, 27 June 2019

CFP (Formal Philosophy, Gdansk)


The International Conference for Philosophy of Science and Formal Methods in Philosophy (CoPS-FaM-19) of the Polish Association for Logic and Philosophy of Science will take place on December 4-6, 2019 at the University of Gdansk (in cooperation with the University of Warsaw). Extended abstract submission: August 31, 2019.

*Keynote speakers*
Hitoshi Omori (Ruhr-Universität Bochum)
Oystein Linnebo (University of Oslo)
Miriam Schoenfield (MIT)
Stanislav Speransky (St. Petersburg State University)
Katya Tentori (University of Trento)

Full submission details available at:
http://lopsegdansk.blogspot.com/p/cops-fam-19-cfp.html


*Programme Committee*
Patrick Blackburn (University of Roskilde)
Cezary Cieśliński (University of Warsaw)
Matteo Colombo (Tilburg University)
Juliusz Doboszewski (Harvard University)
David Fernandez Duque (Ghent University)
Benjamin Eva (University of Konstanz)
Benedict Eastaugh (LMU Munich)
Federico Faroldi (Ghent University)
Michał Tomasz Godziszewski (University of Warsaw)
Valentin Goranko (Stockholm University)
Rafał Gruszczyński (Nicolaus Copernicus University)
Alexandre Guay (University of Louvain)
Zalan Gyenis (Jagiellonian University)
Ronnie Hermens (Utrecht University)
Leon Horsten (University of Bristol)
Johannes Korbmacher (Utrecht University)
Louwe B. Kuijer (University of Liverpool)
Juergen Landes (LMU Munich)
Marianna Antonnutti Marfori (LMU Munich)
Frederik Van De Putte (Ghent University)
Jan-Willem Romeijn (University of Groningen)
Sonja Smets (University of Amsterdam)
Anthia Solaki (University of Amsterdam)
Jan Sprenger (University of Turin)
Stanislav Speransky (St. Petersburg State University)
Tom F. Sterkenburg (LMU Munich)
Johannes Stern (University of Bristol)
Allard Tamminga (University of Groningen)
Mariusz Urbański (Adam Mickiewicz University)
Erik Weber (Ghent University)
Leszek Wroński (Jagiellonian University)

*Local Organizing Committee:*
Rafal Urbaniak
Patryk Dziurosz-Serafinowicz
Pavel Janda
Pawel Pawlowski
Paula Quinon
Weronika Majek
Przemek Przepiórka
Małgorzata Stefaniak

Friday, 17 May 2019

What is conditionalization and why should we do it?

The three central tenets of traditional Bayesian epistemology are these:

Precision Your doxastic state at a given time is represented by a credence function, $c$, which takes each proposition $X$ about which you have an opinion and returns a single numerical value, $c(X)$, that measures the strength of your belief in $X$. By convention, we let $0$ represent your minimal credence and we let $1$ represent your maximal credence.

Probabilism Your credence function should be a probability function. That is, you should assign minimal credence (i.e. 0) to necessarily false propositions, maximal credence (i.e. 1) to necessarily true propositions, and your credence in the disjunction of two propositions whose conjunction is necessarily false should be the sum of your credences in the disjuncts.

Conditionalization You should update your credences by conditionalizing on your total evidence.

Note: Precision sets out the way in which doxastic states will be represented; Probabilism and Conditionalization are norms that are stated using that representation.
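On a finite agenda, Probabilism can be checked mechanically. Here is a minimal Python sketch, assuming the credence function is generated by summing non-negative weights assigned to worlds; the names and numbers are hypothetical.

def satisfies_probabilism(c_worlds, tol=1e-9):
    """c_worlds assigns a credence to each world; a proposition's credence
    is the sum over its worlds. For a function generated this way,
    Probabilism amounts to: every weight lies in [0, 1] and the weights
    sum to 1."""
    return (all(0 <= p <= 1 for p in c_worlds.values())
            and abs(sum(c_worlds.values()) - 1) < tol)

print(satisfies_probabilism({'rain': 0.6, 'dry': 0.4}))   # True
print(satisfies_probabilism({'rain': 0.6, 'dry': 0.5}))   # False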

Here, we will assume Precision and Probabilism and focus on Conditionalization. In particular, we are interested in what exactly the norm says and, more specifically, in which versions of it are supported by the standard arguments in its favour. We will consider three versions of the norm and four arguments for it, and for each combination we'll ask whether the argument can support that version. In each case, we'll notice that the standard formulation of the argument relies on a particular assumption, which we call Deterministic Updating and which we formulate precisely below. We'll then ask whether the argument really does rely on this assumption, or whether it can be amended to support the norm without it. Let's meet the versions and the arguments informally now; then we'll be ready to dive into the details.

Here are the three versions of Conditionalization. According to the first, Actual Conditionalization, Conditionalization governs your actual updating behaviour.

Actual Conditionalization (AC)

If
  • $c$ is your credence function at $t$ (we'll often refer to this as your prior);
  • the total evidence you receive between $t$ and $t'$ comes in the form of a proposition $E$ learned with certainty;
  • $c(E) > 0$;
  • $c'$ is your credence function at the later time $t'$ (we'll often refer to this as your posterior);
then it should be the case that $c'(-) = c(-|E) = \frac{c(-\ \&\ E)}{c(E)}$.
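To illustrate with hypothetical numbers: if your prior gives $c(X\ \&\ E) = 0.3$ and $c(E) = 0.6$, and the total evidence you receive between $t$ and $t'$ is $E$, then AC requires $c'(X) = 0.3/0.6 = 0.5$.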
According to the second, Plan Conditionalization, Conditionalization governs the updating behaviour you would endorse in all possible evidential situations you might face:

Plan Conditionalization (PC)

If
  • $c$ is your credence function at $t$;
  • the total evidence you receive between $t$ and $t'$ will come in the form of a proposition learned with certainty, and that proposition will come from the partition $\mathcal{E} = \{E_1, \ldots, E_n\}$;
  • $R$ is the plan you endorse for how to update in response to each possible piece of total evidence,
then it should be the case that, if you were to receive evidence $E_i$ and if $c(E_i) > 0$, then $R$ would exhort you to adopt credence function $c_i(-) = c(-|E_i) = \frac{c(-\ \&\ E_i)}{c(E_i)}$.

According to the third, Dispositional Conditionalization, Conditionalization governs the updating behaviour you are disposed to exhibit.
 
Dispositional Conditionalization (DC)

If
  • $c$ is your credence function at $t$;
  • the total evidence you receive between $t$ and $t'$ will come in the form of a proposition learned with certainty, and that proposition will come from the partition $\mathcal{E} = \{E_1, \ldots, E_n\}$;
  • $R$ is the plan you are disposed to follow in response to each possible piece of total evidence,
then it should be the case that, if you were to receive evidence $E_i$ and if $c(E_i) > 0$, then $R$ would exhort you to adopt credence function $c_i(-) = c(-|E_i) = \frac{c(-\ \&\ E_i)}{c(E_i)}$.

Next, let's meet the four arguments. Since it will take some work to formulate them precisely, I will give only an informal gloss here. There will be plenty of time to see them in high-definition in what follows.

Diachronic Dutch Book or Dutch Strategy Argument (DSA) This purports to show that, if you violate conditionalization, there is a pair of decisions you might face, one before and one after you receive your evidence, such that your prior and posterior credences lead you to choose options when faced with those decisions that are guaranteed to be worse by your own lights than some alternative options (Lewis 1999).

Expected Pragmatic Utility Argument (EPUA) This purports to show that, if you will face a decision after learning your evidence, then your prior credences will expect your updated posterior credences to do the best job of making that decision if they are obtained by conditionalizing on your priors (Brown 1976).

Expected Epistemic Utility Argument (EEUA) This purports to show that your prior credences will expect your posterior credences to be best epistemically speaking if they are obtained by conditionalizing on your priors (Greaves & Wallace 2006).

Epistemic Utility Dominance Argument (EUDA) This purports to show that, if you violate conditionalization, then there will be alternative priors and posteriors  that are guaranteed to be better epistemically speaking, when considered together, than your priors and posteriors (Briggs & Pettigrew 2018).

The framework


In the following sections, we will consider each of the arguments listed above. As we will see, these arguments are concerned directly with updating plans or dispositions, rather than actual updating behaviour. That is, the items that they consider don't just specify how you in fact update in response to the particular piece of evidence you actually receive. Rather, they assume that your evidence between the earlier and later time will come in the form of a proposition learned with certainty (Certain Evidence); they assume the possible propositions that you might learn with certainty by the later time form a partition (Evidential Partition); and they assume that each of the propositions you might learn with certainty is one about which you had a prior opinion (Evidential Availability); and then they specify, for each of the possible pieces of evidence in your evidential partition, how you might update if you were to receive it.

Some philosophers, like David Lewis (1999), assume that all three assumptions---Certain Evidence, Evidential Partition, Evidential Availability---hold in all learning situations. Others deny one or more: Richard Jeffrey (1992) denies Certain Evidence and Evidential Availability; Jason Konek (2019) denies Evidential Availability but not Certain Evidence; Bas van Fraassen (1999), Miriam Schoenfield (2017), and Jonathan Weisberg (2007) deny Evidential Partition. But all agree, I think, that there are certain important situations in which all three assumptions hold: situations where there is a set of propositions that forms a partition, about each member of which you have a prior opinion, and the possible evidence you might receive at the later time comes in the form of one of these propositions learned with certainty. Examples might include: when you are about to discover the outcome of a scientific experiment, perhaps by taking a reading from a measuring device with unambiguous outputs; when you've asked an expert a yes/no question; when you step on the digital scales in your bathroom or check your bank balance or count the number of spots on the back of the ladybird that just landed on your hand. So, if you disagree with Lewis, simply restrict your attention to these cases in what follows.

As we will see, we can piggyback on conclusions about plans and dispositions to produce arguments about actual behaviour in certain situations. But in the first instance, we will take the arguments to address plans and dispositions defined on evidential partitions primarily, and actual behaviour only secondarily. Thus, to state these arguments, we need a clear way to represent updating plans or dispositions. We will talk neutrally here of an updating rule. If you think conditionalization governs your updating dispositions, then you take it to govern the updating rule that matches those dispositions; if you think it governs your updating intentions, then you take it to govern the updating rule you intend to follow.

We'll introduce a slew of terminology here. You needn't take it all in at the moment, but it's worth keeping it all in one place for ease of reference.

Agenda  We will assume that your prior and posterior credence functions are defined on the same set of propositions $\mathcal{F}$, and we'll assume that $\mathcal{F}$ is a finite algebra. We say that $\mathcal{F}$ is your agenda.

Possible worlds  Given an agenda $\mathcal{F}$, the set of possible worlds relative to $\mathcal{F}$ is the set of classically consistent assignments of truth values to the propositions in $\mathcal{F}$. We'll abuse notation throughout and write $w$ for (i) a truth value assignment to the propositions in $\mathcal{F}$, (ii) the proposition in $\mathcal{F}$ that is true at that truth value assignment and only at that truth value assignment, and (iii) what we might call the omniscient credence function relative to that truth value assignment, which is the credence function that assigns maximal credence (i.e. 1) to all propositions that are true on it and minimal credence (i.e. 0) to all propositions that are false on it.

Updating rules An updating rule has two components:
  • a set of propositions, $\mathcal{E} = \{E_1, \ldots, E_n\}$. This contains the propositions that you might learn with certainty at the later time $t'$; each $E_i$ is in $\mathcal{F}$, so $\mathcal{E} \subseteq \mathcal{F}$; $\mathcal{E}$ forms a partition;
  • a set of sets of credence functions, $\mathcal{C} = \{C_1, \ldots, C_n\}$. For each $E_i$, $C_i$ is the set of possible ways that the rule allows you to respond to evidence $E_i$; that is, it is the set of possible posteriors that the rule permits when you learn $E_i$; each $c'$ in $C_i$ in $\mathcal{C}$ is defined on $\mathcal{F}$.

Deterministic updating rule We say that an updating rule $R = (\mathcal{E}, \mathcal{C})$ is deterministic if each $C_i$ is a singleton set $\{c_i\}$. That is, for each piece of evidence there is exactly one possible response to it that the rule allows.

Stochastic updating rule  A stochastic updating rule is an updating rule $R = (\mathcal{E}, \mathcal{C})$ equipped with a probability function $P$. $P$ records, for each $E_i$ in $\mathcal{E}$ and $c'$ in $C_i$, how likely it is that you will adopt $c'$ in response to learning $E_i$. We write this $P(R^i_{c'} | E_i)$, where $R^i_{c'}$ is the proposition that says that you adopt posterior $c'$ in response to evidence $E_i$.
  • We assume $P(R^i_{c'} | E_i) > 0$ for all $c'$ in $C_i$. If the probability that you will adopt $c'$ in response to $E_i$ is zero, then $c'$ does not count as a response to $E_i$ that the rule allows.
  • Note that every deterministic updating rule is a stochastic updating rule for which $P(R^i_{c'} | E_i) = 1$ for each $c'$ in $C_i$. If $R = (\mathcal{E}, \mathcal{C})$ is deterministic, then, for each $E_i$, $C_i = \{c_i\}$. So let $P(R^i_{c_i} | E_i) = 1$.

Conditionalizing updating rule An updating rule $R = (\mathcal{E}, \mathcal{C})$ is a conditionalizing rule for a prior $c$ if, whenever $c(E_i) > 0$, $C_i = \{c_i\}$ and $c_i(-) = c(-|E_i)$.

Conditionalizing pairs  A pair $\langle c, R \rangle$ of a prior and an updating rule is a conditionalizing pair if $R$ is a conditionalizing rule for $c$.

Pseudo-conditionalizing updating rule Suppose $R = (\mathcal{E}, \mathcal{C})$ is an updating rule. Then let $\mathcal{F}^*$ be the smallest algebra that contains all of $\mathcal{F}$ and also $R^i_{c'}$ for each $E_i$ in $\mathcal{E}$ and $c'$ in $C_i$. (As above $R^i_{c'}$ is the proposition that says that you adopt posterior $c'$ in response to evidence $E_i$.) Then an updating rule $R$ is a pseudo-conditionalizing rule for a prior $c$ if it is possible to extend $c$, a credence function defined on $\mathcal{F}$, to $c^*$, a credence function defined on $\mathcal{F}^*$, such that, for each $E_i$ in $\mathcal{E}$ and $c'$ in $C_i$, $c'(-) = c^*(-|R^i_{c'})$. That is, each posterior is the result of conditionalizing the extended prior $c^*$ on the evidence to which it is a response and the fact that it was your response to this evidence.

Pseudo-conditionalizing pair A pair $\langle c, R \rangle$ of a prior and an updating rule is a pseudo-conditionalizing pair if $R$ is a pseudo-conditionalizing rule for $c$.
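These definitions translate directly into code. Here is a minimal Python sketch, with hypothetical names, that represents a rule as a list of (evidence, permitted posteriors) pairs and checks the deterministic and conditionalizing properties; the conditionalize helper is repeated so the snippet stands alone.

def conditionalize(c, E):
    c_E = sum(p for w, p in c.items() if w in E)
    return {w: (p / c_E if w in E else 0.0) for w, p in c.items()}

def is_deterministic(rule):
    """Deterministic: each C_i is a singleton."""
    return all(len(C_i) == 1 for _, C_i in rule)

def is_conditionalizing(rule, c):
    """Conditionalizing for c: wherever c(E_i) > 0, the unique permitted
    posterior is c(. | E_i)."""
    return all(sum(c[w] for w in E_i) == 0 or C_i == [conditionalize(c, E_i)]
               for E_i, C_i in rule)

# A hypothetical prior and evidential partition:
c = {'w1': 0.5, 'w2': 0.25, 'w3': 0.25}
E1, E2 = {'w1', 'w2'}, {'w3'}
R = [(E1, [conditionalize(c, E1)]), (E2, [conditionalize(c, E2)])]
print(is_deterministic(R), is_conditionalizing(R, c))   # True True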

Let's illustrate these definitions using an example. Condi is a meteorologist. There is a hurricane in the Gulf of Mexico. She knows that it will make landfall soon in one of the following four towns: Pensacola, FL; Panama City, FL; Mobile, AL; Biloxi, MS. She calls a friend and asks whether it has hit yet. It has. Then she asks whether it has hit in Florida. At this point, the evidence she will receive when her friend answers is either $F$---which says that it made landfall in Florida, that is, in Pensacola or Panama City---or $\overline{F}$---which says it hit elsewhere, that is, in Mobile or Biloxi. Her prior is $c$:

Her evidential partition is $\mathcal{E} = \{F, \overline{F}\}$. And here are some posteriors she might adopt:



And here are four possible rules she might adopt, along with their properties:


As we will see below, for each of our four arguments for conditionalization---DSA, EPUA, EEUA, and EUDA---the standard formulation of the argument assumes a norm that we will call Deterministic Updating:

Deterministic Updating (DU) Your updating rule should be deterministic.

As we will see, this is crucial for the success of these arguments. In what follows, I will present each argument in its standard formulation, which assumes Deterministic Updating. Then I will explore what happens when we remove that assumption.

The Dutch Strategy Argument (DSA)


The DSA and EPUA both evaluate updating rules by their pragmatic consequences. That is, they look to the choices that your priors and/or your possible posteriors lead you to make and they conclude that they are optimal only if your updating rule is a conditionalizing rule for your prior.

DSA with Deterministic Updating


Let's look at the DSA first. In what follows, we'll take a decision problem to be a set of options that are available to an agent: e.g. accept a particular bet or refuse it; buy a particular lottery ticket or don't; take an umbrella when you go outside, take a raincoat, or take neither; and so on. The idea behind the DSA is this. One of the roles of credences is to help us make choices when faced with decision problems. They play that role badly if they lead us to make one series of choices when another series is guaranteed to serve our ends better. The DSA turns on the claim that, unless we update in line with Conditionalization, our credences will lead us to make such a series of choices when faced with a particular series of decision problems.

Here, we restrict attention to a particular class of decision problems you might face. They are the decision problems in which, for each available option, its outcome at a given possible world obtains for you a certain amount of a particular quantity, such as money or chocolate or pure pleasure, and your utility is linear in that quantity---that is, obtaining some amount of that quantity increases your utility by the same amount regardless of how much of the quantity you already have. The quantity is typically taken to be money, and we'll continue to talk like that in what follows. But it's really a placeholder for some quantity with this property. We restrict attention to such decision problems because, in the argument, we need to combine the outcome of one decision, made at the earlier time, with the outcome of another decision, made at the later time. So we need to ensure that the utility of a combination of outcomes is the sum of the utilities of the individual outcomes.

Now, as we do throughout, we assume that the prior $c$ and the possible posteriors $c_1, \ldots, c_n$ permitted by a deterministic updating rule $R$ are all probability functions. And we will assume further that, when your credences are probabilistic, and you face a decision problem, then you should choose from the available options one of those that maximises expected utility relative to your credences.

With this in hand, let's define two closely related features of a pair $\langle c, R \rangle$ that are undesirable from a pragmatic point of view, and might be thought to render that pair irrational. First:

Strong Dutch Strategies  $\langle c, R \rangle$ is vulnerable to a strong Dutch strategy if there are two decision problems, $\mathbf{d}$, $\mathbf{d}'$ such that
  1. $c$ requires you to choose option $A$ from the possible options available in $\mathbf{d}$;
  2. for each $E_i$ and each $c'$ in $C_i$, $c'$ requires you to choose $B$ from $\mathbf{d}'$;
  3. there are alternative options, $X$ in $\mathbf{d}$ and $Y$ in $\mathbf{d}'$, such that, at every possible world, you'll receive more utility from choosing $X$ and $Y$ than you receive from choosing $A$ and $B$. In the language of decision theory, $X + Y$ strongly dominates $A + B$.

Weak Dutch Strategies  $\langle c, R \rangle$ is vulnerable to a weak Dutch strategy if there are decision problems $\mathbf{d}$ and, for each $c'$ in $C_i$ in $\mathcal{C}$, $\mathbf{d}'_{c'}$ such that
  1. $c$ requires you to choose $A$ from $\mathbf{d}$;
  2. for each $E_i$ and each $c'$ in $C_i$, $c'$ requires you to choose $B^i_{c'}$ from $\mathbf{d}'_{c'}$;
  3. there are alternative options, $X$ in $\mathbf{d}$ and, for $E_i$ and $c'$ in $C_i$, $Y^i_{c'}$ in $\mathbf{d}'_{c'}$, such that (a) for each $E_i$, each world in $E_i$, and each $c'$ in $C_i$, you'll receive at least as much utility at that world from choosing $X$ and $Y^i_{c'}$ as you'll receive from choosing $A$ and $B^i_{c'}$, and (b) for some $E_i$, some world in $E_i$, and some $c'$ in $C_i$, you'll receive strictly more utility at that world from $X$ and $Y^i_{c'}$ than you'll receive from $A$ and $B^i_{c'}$.
Then the Dutch Strategy Argument is based on the following mathematical fact (de Finetti 1974):

Theorem 1 Suppose $R$ is a deterministic updating rule. Then:
  1. if $R$ is not a conditionalizing rule for $c$, then $\langle c, R \rangle$ is vulnerable to a strong Dutch strategy;
  2. if $R$ is a conditionalizing rule for $c$, then $\langle c, R \rangle$ is not vulnerable even to a weak Dutch strategy.
That is, if your updating rule is not a conditionalizing rule for your prior, then your credences will lead you to choose a strongly dominated pair of options when faced with a particular pair of decision problems; if you satisfy it, that can't happen.

Now that we have seen how the argument works, let's see whether it supports the three versions of conditionalization that we met above: Actual (AC), Plan (PC), and Dispositional (DC) Conditionalization. Since they speak directly of rules, let's begin with PC and DC.

The DSA shows that, if you endorse a deterministic rule that isn't a conditionalizing rule for your prior, then there is a pair of decision problems, one that you'll face at the earlier time and the other at the later time, where your credences at the earlier time and your planned credences at the later time will require you to choose a dominated pair of options. And it seems reasonable to say that it is irrational to endorse a plan when you will be rendered vulnerable to a Dutch Strategy if you follow through on it. So, for those who endorse deterministic rules, DSA plausibly supports Plan Conditionalization.

The same is true of Dispositional Conditionalization. Just as it is irrational to plan to update in a way that would render you vulnerable to a Dutch Strategy if you were to stick to the plan, it is surely irrational to be disposed to update in a way that would render you vulnerable in this way. So, for those whose updating dispositions are deterministic, DSA plausibly supports Dispositional Conditionalization.

Finally, AC. There are various ways to move from either PC or DC to AC, but each of them requires some extra assumptions. For instance:

(I) I might assume: (i) between an earlier and a later time, there is always a partition such that you know that the strongest piece of evidence you might receive between those times is a proposition from that partition, learned with certainty; (ii) if you know you'll receive evidence from some partition, you are rationally required to plan how you will update on each possible piece of evidence before you receive it; and (iii) if you plan how to respond to evidence before you receive it, you are rationally required to follow through on that plan once you have received it. Together with PC + DU, these give AC.

(II) I might assume: (i) you have updating dispositions. So, if you actually update other than by conditionalization, then it must be a manifestation of a  disposition other than conditionalizing. Together with DC + DU, this gives  AC.

(III) I might assume: (i) that you are rationally required to update in any way that can be represented as the result of updating on a plan that you were rationally permitted to endorse or as the result of dispositions that you were rationally permitted to have, even if you did not in fact endorse any plan prior to receiving the evidence nor have any updating dispositions. Again, together with PC + DU or DC + DU, this gives AC.

Notice that, in each case, it was essential to invoke Deterministic Updating (DU). As we will see below, this causes problems for AC.

DSA without Deterministic Updating


We have now seen how the DSA proceeds if we assume Deterministic Updating. But what if we don't? Consider, for instance, rule $R_3$ from our list of examples above:
$$R_3 = (\mathcal{E} = \{F, \overline{F}\}, \mathcal{C} = \{\{c^\circ_F, c^+_F\}, \{c^\circ_{\overline{F}}, c^+_{\overline{F}}\}\})$$
That is, if Condi learns $F$, rule $R_3$ allows her to update to $c^\circ_F$ or to $c^+_F$. And if she receives $\overline{F}$, it allows her to update to $c^\circ_{\overline{F}}$ or to $c^+_{\overline{F}}$. Notice that $R_3$ violates conditionalization thoroughly: it is not deterministic; and, moreover, as well as not mandating the posteriors that conditionalization demands, it does not even permit them. Can we adapt the DSA to show that $R_3$ is irrational? No. We cannot use Dutch Strategies to show that $R_3$ is irrational because it isn't vulnerable to them.

To see this, we first note that, while $R_3$ is not deterministic and not a conditionalizing rule, it is a pseudo-conditionalizing rule.  And to see that, it helps to state the following representation theorem for pseudo-conditionalizing rules.

Lemma 1 $R$ is a pseudo-conditionalizing rule for $c$ iff
  1. for all $E_i$ in $\mathcal{E}$ and $c'$ in $C_i$, $c'(E_i) = 1$, and
  2. $c$ is in the convex hull of the possible posteriors that $R$ permits.
But note:$$c(-) = 0.4c^\circ_F(-) + 0.4c^+_F(-) + 0.1c^\circ_{\overline{F}}(-) + 0.1 c^+_{\overline{F}}(-)$$
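Here is a quick numerical check of this identity in Python. The mixture weights $0.4$, $0.4$, $0.1$, $0.1$ are the ones just displayed; the particular values for the four posteriors, and the prior they mix to, are hypothetical stand-ins for Condi's tables, chosen only so that the identity holds.

import math

worlds = ['Pensacola', 'PanamaCity', 'Mobile', 'Biloxi']
c          = dict(zip(worlds, [0.4, 0.4, 0.1, 0.1]))   # illustrative prior
c0_F       = dict(zip(worlds, [0.7, 0.3, 0.0, 0.0]))   # illustrative posteriors,
cplus_F    = dict(zip(worlds, [0.3, 0.7, 0.0, 0.0]))   # each certain of the
c0_notF    = dict(zip(worlds, [0.0, 0.0, 0.7, 0.3]))   # evidence it answers to
cplus_notF = dict(zip(worlds, [0.0, 0.0, 0.3, 0.7]))

mix = {w: 0.4*c0_F[w] + 0.4*cplus_F[w] + 0.1*c0_notF[w] + 0.1*cplus_notF[w]
       for w in worlds}
assert all(math.isclose(mix[w], c[w]) for w in worlds)   # c is in the convex hull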
So $R_3$ is pseudo-conditionalizing. What's more:


Theorem 2
  • If $R$ is not a pseudo-conditionalizing rule for $c$, then $\langle c, R \rangle$ is vulnerable at least to a weak Dutch Strategy, and possibly also a strong Dutch Strategy.
  • If $R$ is a pseudo-conditionalizing rule for $c$, then $\langle c, R \rangle$ is not vulnerable to a weak Dutch Strategy.
Thus, $\langle c, R_3 \rangle$ is not vulnerable even to a weak Dutch Strategy. The DSA, then, cannot say what is irrational about Condi if she begins with prior $c$ and either endorses $R_3$ or is disposed to update in line with it. Thus, the DSA cannot justify Deterministic Updating. And without DU, it cannot support PC or DC either. After all, $R_3$ violates each of those, but it is not vulnerable even to a weak Dutch Strategy. Moreover, each of the three arguments for AC breaks down, because each depends on PC or DC. The problem is that, if Condi updates from $c$ to $c^\circ_F$ upon learning $F$, she violates AC; but there is a non-deterministic updating rule---namely, $R_3$---that allows $c^\circ_F$ as a response to learning $F$, and, for all DSA tells us, she might have rationally endorsed $R_3$ before learning $F$ or she might rationally have been disposed to follow it. Indeed, the only restriction that DSA can place on your actual updating behaviour is that you should become certain of the evidence you learned. After all:

Theorem 3 Suppose $c$ is your prior, $E_i$ is the proposition you learn, and $c'$ is your posterior. Then there is a rule $R = (\mathcal{E}, \mathcal{C})$, with $E_i$ in $\mathcal{E}$, such that:
  1. $c'$ is in $C_i$, and
  2. $R$ is a pseudo-conditionalizing rule for $c$
if, and only if, $c'(E_i) = 1$.

Thus, at the end of this section, we can conclude that, whatever is irrational about planning to update using non-deterministic but pseudo-conditionalizing updating rules, it cannot be that following through on those plans leaves you vulnerable to a Dutch Strategy, for it does not. And similarly, whatever is irrational about being disposed to update in those ways, it cannot be that those dispositions will equip you with credences that lead you to choose dominated options, for they do not. With PC and DC thus blocked, our route to AC is therefore also blocked.

The Expected Pragmatic Utility Argument (EPUA)


Let's look at EPUA next. Again, we will consider how our credences guide our actions when we face decision problems. In this case, there is no need to restrict attention to monetary decision problems. We will consider only a single decision problem, which we face at the later time, after we've received the evidence, so we won't have to combine the outcomes of multiple options as we did in the DSA. The idea is this. Suppose you will make a decision after you receive whatever evidence it is that you receive at the later time. And suppose that you will use your later updated credence function to make that choice---indeed, you'll choose from the available options by maximising expected utility from the point of view of your new updated credences. Which updating rules does your prior expect to guide that choice best?

EPUA with Deterministic Updating


Suppose you'll face decision problem $\mathbf{d}$ after you've updated. And suppose further that you'll use a deterministic updating rule $R$. Then, if $w$ is a possible world and $E_i$ is the element of the evidential partition $\mathcal{E}$ that is true at $w$, the idea is that we take the pragmatic utility of $R$ relative to $\mathbf{d}$ at $w$ to be the utility at $w$ of whatever option from $\mathbf{d}$ we should choose if our posterior credence function were $c_i$, as $R$ requires it to be at $w$. But of course, for many decision problems, this isn't well defined because there is no unique option in $\mathbf{d}$ that maximises expected utility by the lights of $c_i$; rather there are sometimes many such options, and they might have different utilities at $w$. Thus, we need not only $c_i$ but also a selection function, which picks a single option from  any set of options. If $f$ is such a selection function, then let $A^{\mathbf{d}}_{c_i, f}$ be the option that $f$ selects from the set of options in $\mathbf{d}$ that maximise expected utility by the lights of $c_i$. And let
$$u_{\mathbf{d},f}(R, w) = u(A^{\mathbf{d}}_{c_i, f}, w).$$
Then the EPUA argument turns on the following mathematical fact (Brown 1976):

Theorem 4 Suppose $R$ and $R^\star$ are both deterministic updating rules. Then:
  • If $R$ and $R^\star$ are both conditionalizing rules for $c$, and $f$, $g$ are selection functions, then for all decision problems $\mathbf{d}$ $$\sum_{w \in W} c(w) u_{\mathbf{d}, f}(R, w) = \sum_{w \in W} c(w) u_{\mathbf{d}, g}(R^\star, w)$$
  • If $R$ is a conditionalizing rule for $c$, and $R^\star$ is not, and $f$, $g$ are selection functions, then for all decision problems $\mathbf{d}$, $$\sum_{w \in W} c(w) u_{\mathbf{d}, f}(R, w) \geq \sum_{w \in W} c(w) u_{\mathbf{d}, g}(R^\star, w)$$ with strict inequality for some decision problems $\mathbf{d}$.
That is, a deterministic updating rule maximises expected pragmatic utility by the lights of your prior just in case it is a conditionalizing rule for your prior.

As in the case of the DSA above, then, if we assume Deterministic Updating (DU), we can establish PC and DC, and on the back of those AC as well. After all, it is surely irrational to plan to update in one way when you expect another way to guide your actions better in the future; and it is surely irrational to be disposed to update in one way when you expect another to guide you better. And as before there are the same three arguments for AC on the back of PC and DC.

EPUA without Deterministic Updating


How does EPUA fare when we widen our view to include non-deterministic updating rules as well? An initial problem is that it is no longer clear how to define the pragmatic utility of such an updating rule relative to a decision problem at a possible world. Above, we said that, relative to a decision problem $\mathbf{d}$ and a selection function $f$, the pragmatic utility of rule $R$ at world $w$ is the utility of the option that you would choose when faced with $\mathbf{d}$ using the credence function that $R$ mandates at $w$ and $f$: that is, if $E_i$ is true at $w$, then
$$u_{\mathbf{d}, f}(R, w) = u(A^{\mathbf{d}}_{c_i, f}, w).$$
But, if $R$ is not deterministic, there might be no single credence function that it mandates at $w$. If $E_i$ is the piece of evidence you'll learn at $w$ and $R$ permits more than one credence function in response to $E_i$, then there might be a range of different options in $\mathbf{d}$, each of which maximises expected utility relative to a different credence function $c'$ in $C_i$. So what are we to do?

Our response to this problem depends on whether we wish to argue for Plan or Dispositional Conditionalization (PC or DC). Suppose, first, that we are interested in DC. That is, we are interested in a norm that governs the updating rule that records how you are disposed to update when you receive certain evidence. Then it seems reasonable to assume that the updating rule that records your dispositions is stochastic. That is, for each possible piece of evidence $E_i$ and each possible response $c'$ in $C_i$ to that evidence that you might adopt in response to receiving that evidence, there is some objective chance that you will respond to $E_i$ by adopting $c'$. As we explained above, we'll write this $P(R^i_{c'} | E_i)$, where $R^i_{c'}$ is the proposition that you receive $E_i$ and respond by adopting $c'$. Then, if $E_i$ is true at $w$,  we might take the pragmatic utility of $R$ relative to $\mathbf{d}$ and $f$ at $w$ to be the expectation of the utility of the options that each permitted response to $E_i$ (and selection function $f$) would lead us to choose:
$$u_{\mathbf{d}, f}(R, w) = \sum_{c' \in C_i} P(R^i_{c'} | E_i) u(A^{\mathbf{d}}_{c', f}, w)$$
With this in hand, we have the following result:

Theorem 5 Suppose $R$ and $R^\star$ are both updating rules. Then:
  • If $R$ and $R^\star$ are both conditionalizing rules for $c$, and $f$, $g$ are selection functions, then for all decision problems $\mathbf{d}$, $$\sum_{w \in W} c(w) u_{\mathbf{d}, f}(R, w) = \sum_{w \in W} c(w) u_{\mathbf{d}, g}(R^\star, w)$$
  • If $R$ is a conditionalizing rule for $c$, and $R^\star$ is a stochastic but not conditionalizing rule, and $f$, $g$ are selection functions, then for all decision problems $\mathbf{d}$, $$\sum_{w \in W} c(w) u_{\mathbf{d}, f}(R, w) \geq \sum_{w \in W} c(w) u_{\mathbf{d}, g}(R^\star, w)$$ with strict inequality for some decision problems $\mathbf{d}$.
This shows the first difference between the DSA and EPUA. The latter, but not the former, provides a route to establishing Dispositional Conditionalization (DC). If we assume that your dispositions are governed by a chance function, and we use that chance function to calculate expectations, then we can show that your prior will expect your posteriors to do worse as a guide to action unless you are disposed to update by conditionalizing on the evidence you receive.

Next, suppose we are interested in Plan Conditionalization (PC). In this case, we might try to appeal again to Theorem 5. To do that, we must assume that, while there are non-deterministic updating rules that we might endorse, they are all at least stochastic updating rules; that is, they all come equipped with a probability function that determines how likely it is that I will adopt a particular permitted response to the evidence. That is, we might say that the updating rules that we might endorse are either deterministic or non-deterministic-but-stochastic. In the language of game theory, we might say that the updating strategies between which we choose are either pure or mixed. And then Theorem 5 will show that we should adopt a deterministic-and-conditionalizing rule, rather than any deterministic-but-non-conditionalizing or non-deterministic-but-stochastic rule. The problem with this proposal is that it seems just as arbitrary to restrict to deterministic and non-deterministic-but-stochastic rules as it was to restrict to deterministic rules in the first place. Why should we not be able to endorse a non-deterministic and non-stochastic rule---that is, a rule that says, for at least one possible piece of evidence $E_i$ in $\mathcal{E}$, there are two or more posteriors that the rule permits as responses, but does not endorse any chance mechanism by which we'll choose between them? But if we permit these rules, how are we to define their pragmatic utility relative to a decision problem and at a possible world?

Here's one suggestion. Suppose $E_i$ is the proposition in $\mathcal{E}$ that is true at world $w$. And suppose $\mathbf{d}$ is a decision problem and $f$ is a selection function. Then we might take the pragmatic utility of $R$ relative to $\mathbf{d}$ and $f$ at $w$ to be the average utility of the options that each permissible response to $E_i$ and $f$ would choose when faced with $\mathbf{d}$. That is,$$u_{\mathbf{d}, f}(R, w) = \frac{1}{|C_i|} \sum_{c' \in C_i} u(A^{\mathbf{d}}_{c', f}, w)$$where $|C_i|$ is the size of $C_i$, that is, the number of possible responses to $E_i$ that $R$ permits. If that's the case, then we have the following:

Theorem 6 Suppose $R$ and $R^\star$ are updating rules, $R$ is a conditionalizing rule for $c$, $R^\star$ is not deterministic, not stochastic, and not a conditionalizing rule for $c$, and $f$, $g$ are selection functions. Then, for all decision problems $\mathbf{d}$,
$$\sum_{w \in W} c(w) u_{\mathbf{d}, f}(R, w) \geq \sum_{w \in W} c(w) u_{\mathbf{d}, g}(R^\star, w)$$ with strict inequality for some decision problems $\mathbf{d}$.

Put together with Theorems 4 and 5, this shows that our prior expects us to do better by endorsing a conditionalizing rule than by endorsing any other sort of rule, whether that is a deterministic and non-conditionalizing rule, a non-deterministic but stochastic rule, or a non-deterministic and non-stochastic rule.
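Here is a small numerical spot-check of this comparison, in the spirit of Theorems 4-6, with a hypothetical prior, a hypothetical decision problem, and a non-deterministic, non-stochastic rival rule. Options are represented as maps from worlds to utilities, and non-deterministic rules are scored by the average described above.

def conditionalize(c, E):
    c_E = sum(p for w, p in c.items() if w in E)
    return {w: (p / c_E if w in E else 0.0) for w, p in c.items()}

def exp_u(credence, option):
    return sum(credence[w] * option[w] for w in credence)

def choose(credence, d):
    """A selection function: pick the first expected-utility maximiser."""
    return max(d, key=lambda option: exp_u(credence, option))

def pragmatic_utility(rule, d, w):
    """Average over the permitted posteriors; a singleton C_i recovers the
    deterministic case."""
    E_i, C_i = next((E, C) for E, C in rule if w in E)
    return sum(choose(cp, d)[w] for cp in C_i) / len(C_i)

def expected_pu(c, rule, d):
    return sum(c[w] * pragmatic_utility(rule, d, w) for w in c)

c = {'w1': 0.5, 'w2': 0.25, 'w3': 0.25}
E1, E2 = {'w1', 'w2'}, {'w3'}
d = [{'w1': 1, 'w2': 0, 'w3': 0},
     {'w1': 0, 'w2': 1, 'w3': 0},
     {'w1': 0, 'w2': 0, 'w3': 1}]

cond  = [(E1, [conditionalize(c, E1)]), (E2, [conditionalize(c, E2)])]
rival = [(E1, [{'w1': 1.0, 'w2': 0.0, 'w3': 0.0},
               {'w1': 0.0, 'w2': 1.0, 'w3': 0.0}]),
         (E2, [conditionalize(c, E2)])]

print(expected_pu(c, cond, d), expected_pu(c, rival, d))   # 0.75 0.625

On this toy example the conditionalizing rule comes out ahead, as the theorems predict.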

So, again, we see a difference between DSA and EPUA. Just as the latter, but not the former, provides a route to establishing DC without assuming Deterministic Updating, so the latter but not the former provides a route to establishing PC without DU. And from both of those, we have the usual three routes to AC. This means that EPUA explains what might be irrational about endorsing a non-deterministic updating rule, or having dispositions that match one. If you do, there's some alternative updating rule that your prior expects to do better as a guide to future action.

Expected Epistemic Utility Argument (EEUA)


The previous two arguments criticized non-conditionalizing updating rules from the standpoint of pragmatic utility. The EEUA and EUDA both criticize such rules from the standpoint of epistemic utility. The idea is this: just as credences play a pragmatic role in guiding our actions, so they play other roles as well---they represent the world; they respond to evidence; they might be more or less coherent. These roles are purely epistemic. And so just as we defined the pragmatic utility of a credence function at a world when faced with a decision problem, so we can also define the epistemic utility of a credence function at a world---it is a measure of how valuable it is to have that credence function from a purely epistemic point of view.

EEUA with Deterministic Updating


We will not give an explicit definition of the epistemic utility of a credence function at a world. Rather, we'll simply state two properties that we'll take measures of such epistemic utility to have. These are widely assumed in the literature on epistemic utility theory and accuracy-first epistemology, and I'll defer to the arguments in favour of them that are outlined there (Joyce 2009, Pettigrew 2016, Horowitz 2019).

A local epistemic utility function is a function $s$ that takes a single credence and a truth value---either true (1) or false (0)---and returns the epistemic value of having that credence in a proposition with that truth value. Thus, $s(1, p)$ is the epistemic value of having credence $p$ in a truth, while $s(0, p)$ is the epistemic value of having credence $p$ in a falsehood. A global epistemic utility function is a function $EU$ that takes an entire credence function defined on $\mathcal{F}$ and a possible world and returns the epistemic value of having that credence function when the propositions in $\mathcal{F}$ have the truth values they have in that world.

Strict Propriety  A local epistemic utility function $s$ is strictly proper if each credence expects itself and only itself to have the greatest epistemic utility. That is, for all $0 \leq p \leq 1$, $$p s(1, x) + (1-p) s(0, x)$$ is maximised, as a function of $x$, uniquely at $x = p$.

Additivity  A global epistemic utility function is additive if, for each proposition $X$ in $\mathcal{F}$, there is a local epistemic utility function $s_X$ such that the epistemic utility of a credence function $c$ at a possible world is the sum of the epistemic utilities at that world of the credences it assigns. If $w$ is a possible world and we write $w(X)$ for the truth value (0 or 1) of proposition $X$ at $w$, this says:$$EU(c, w) = \sum_{X \in \mathcal{F}} s_X(w(X), c(X))$$
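A standard example satisfying both conditions is the quadratic (Brier) epistemic utility function. The following sketch spot-checks Strict Propriety for it on a grid of credences.

def s(truth, x):
    """Local quadratic utility: negative squared distance from the truth value."""
    return -(truth - x) ** 2

def expected_s(p, x):
    return p * s(1, x) + (1 - p) * s(0, x)

grid = [i / 100 for i in range(101)]
for p in [0.0, 0.3, 0.5, 0.9]:
    best = max(grid, key=lambda x: expected_s(p, x))
    assert best == p   # the expectation is maximised at x = p, as required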

We then define the epistemic utility of a deterministic updating rule $R$ in the same way we defined its pragmatic utility above: if $E_i$ is true at $w$, and $C_i = \{c_i\}$, then
$$EU(R, w) = EU(c_i, w)$$Then the standard formulation of the EEUA turns on the following theorem (Greaves & Wallace 2006):

Theorem 7 Suppose $R$ and $R^\star$ are deterministic updating rules. Then:
  • If $R$ and $R^\star$ are both conditionalizing rules for $c$, then$$\sum_{w \in W} c(w) EU(R, w) = \sum_{w \in W} c(w) EU(R^\star, w)$$
  • If $R$ is a conditionalizing rule for $c$ and $R^\star$ is not, then$$\sum_{w \in W} c(w) EU(R, w) > \sum_{w \in W} c(w) EU(R^\star, w)$$
That is, a deterministic updating rule maximises expected epistemic utility by the lights of your prior just in case it is a conditionalizing rule for your prior.
So, as for DSA and EPUA, if we assume Deterministic Updating, we obtain an argument for PC and DC, and indirectly one for AC too.
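Theorem 7 can also be spot-checked numerically. Here is a minimal sketch with a hypothetical prior, using an additive Brier-style utility summed over the singleton propositions (a simplification of summing over the whole algebra, which preserves strict propriety).

def conditionalize(c, E):
    c_E = sum(p for w, p in c.items() if w in E)
    return {w: (p / c_E if w in E else 0.0) for w, p in c.items()}

def EU(cred, w):
    """Additive Brier utility over the singleton propositions."""
    return sum(-((1.0 if x == w else 0.0) - cred[x]) ** 2 for x in cred)

def expected_EU(c, rule):
    # Deterministic rules: each C_i is the singleton [posterior].
    return sum(c[w] * EU(C_i[0], w) for E_i, C_i in rule for w in E_i)

c = {'w1': 0.5, 'w2': 0.25, 'w3': 0.25}
E1, E2 = {'w1', 'w2'}, {'w3'}
cond  = [(E1, [conditionalize(c, E1)]), (E2, [conditionalize(c, E2)])]
other = [(E1, [{'w1': 0.5, 'w2': 0.5, 'w3': 0.0}]), (E2, [conditionalize(c, E2)])]
print(expected_EU(c, cond) > expected_EU(c, other))   # True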

EEUA without Deterministic Updating


If we don't assume Deterministic Updating, the situation here is very similar to the one we encountered above when we considered EPUA. Suppose $R$ is a non-deterministic but stochastic updating rule. Then, as above, we let its epistemic utility at a world be the expectation of the epistemic utility that the various possible posteriors permitted by $R$ take at that world. That is, if $E_i$ is the proposition in $\mathcal{E}$ that is true at $w$, then$$EU(R, w) = \sum_{c' \in C_i} P(R^i_{c'} | E_i) EU(c', w)$$ Then, we have a similar result to Theorem 5:

Theorem 8 Suppose $R$ and $R^\star$ are updating rules, $R$ is a conditionalizing rule for $c$, and $R^\star$ is stochastic but not a conditionalizing rule for $c$. Then
$$\sum_{w \in W} c(w) EU(R, w) > \sum_{w \in W} c(w) EU(R^\star, w)$$

Next, suppose $R$ is a non-deterministic but also a non-stochastic rule. Then we let its epistemic utility at a world be the average epistemic utility that the various possible posteriors permitted by $R$ take at that world. That is, if $E_i$ is the proposition in $\mathcal{E}$ that is true at $w$, then
$$EU(R, w) = \frac{1}{|C_i|}\sum_{c' \in C_i} EU(c', w)$$And again we have a similar result to Theorem 6:

Theorem 9 Suppose $R$ and $R^\star$ are updating rules, $R$ is a conditionalizing rule for $c$, and $R^\star$ is not deterministic, not stochastic, and not a conditionalizing rule for $c$. Then:
$$\sum_{w \in W} c(w) EU(R, w) > \sum_{w \in W} c(w) EU(R^\star, w)$$

So the situation is the same as for EPUA. Whether we assess a rule by looking at how well the posteriors it produces guide our future actions, or how good they are from a purely epistemic point of view, our prior will expect a conditionalizing rule for itself to be better than any non-conditionalizing rule. And thus we obtain PC and DC, and indirectly AC as well.

Epistemic Utility Dominance Argument (EUDA)


Finally, we turn to the EUDA. In EPUA and EEUA, we assess the pragmatic or epistemic utility of the updating rule from the viewpoint of the prior. In DSA, we assess the prior and updating rule together, and from no particular point of view; but, unlike the EPUA and EEUA, we do not assign utilities, either pragmatic or epistemic, to the prior and the rule. In EUDA, as in DSA and unlike EPUA and EEUA, we assess the prior and updating rule together, and again from no particular point of view; but unlike in DSA and like in EPUA and EEUA, we assign utilities to them---in particular, epistemic utilities---and assess them with reference to those.

EUDA with Deterministic Updating


Suppose $R$ is a deterministic updating rule. Then, as before, if $E_i$ is true at $w$, let the epistemic utility of $R$ be the epistemic utility of the credence function $c_i$ that it mandates at $w$: that is,$$EU(R, w) = EU(c_i, w).$$
But this time also let the epistemic utility of the pair $\langle c, R \rangle$ consisting of the prior and the updating rule be the sum of the epistemic utility of the prior and the epistemic utility of the updating rule: that is,$$EU(\langle c, R \rangle, w) = EU(c, w) + EU(R, w) = EU(c, w) + EU(c_i, w)$$
Then the EUDA turns on the following mathematical fact (Briggs & Pettigrew 2018):

Theorem 10  Suppose $EU$ is an additive, strictly proper epistemic utility function. And suppose $R$ and $R^\star$ are deterministic updating rules. Then:
  • if $\langle c, R \rangle$ is non-conditionalizing, there is $\langle c^\star, R^\star \rangle$ such that, for all $w$, $$EU(\langle c, R \rangle, w) < EU(\langle c^\star, R^\star \rangle, w)$$
  • if $\langle c, R \rangle$ is conditionalizing, there is no $\langle c^\star, R^\star \rangle$ such that, for all $w$, $$EU(\langle c, R \rangle, w) < EU(\langle c^\star, R^\star \rangle, w)$$
That is, if $R$ is not a conditionalizing rule for $c$, then together they are $EU$-dominated; if it is a conditionalizing rule, they are not. Thus, like EPUA and EEUA and unlike DSA, if we assume Deterministic Updating, EUDA gives PC, DC, and indirectly AC.

EUDA without Deterministic Updating


Now suppose we permit non-deterministic updating rules as well as deterministic ones. In this case, there are two approaches we might take. On the one hand, we might define the epistemic utility of non-deterministic rules, both stochastic and non-stochastic, just as we did for EEUA. That is, we might take the epistemic utility of a stochastic rule at a world to be the expectation of the epistemic utility of the various posteriors that it permits in response to the evidence that you obtain at that world; and the epistemic utility of a non-stochastic rule at a world is the average of those epistemic utilities. This gives us the following result:

Theorem 11  Suppose $EU$ is an additive, strictly proper epistemic utility function. Then, if $\langle c, R \rangle$ is not a conditionalizing pair, there is an alternative pair $\langle c^\star, R^\star \rangle$ such that, for all $w$, $$EU(\langle c, R \rangle, w) < EU(\langle c^\star, R^\star \rangle, w)$$ And this therefore supports an argument for PC and DC, and indirectly for AC as well.

On the other hand, we might consider more fine-grained possible worlds, which specify not only the truth value of all the propositions in $\mathcal{F}$, but also which posterior I adopt. We can then ask: given a particular pair $\langle c, R \rangle$, is there an alternative pair $\langle c^\star, R^\star \rangle$ that has greater epistemic utility at every fine-grained world by the lights of $EU$? If we judge updating rules by this standard, we get a rather different answer. If $E_i$ is the element of $\mathcal{E}$ that is true at $w$, and $c'$ is in $C_i$ and $c^{\star \prime}$ is in $C^\star_i$, then we write $w\ \&\ R^i_{c'}\ \&\ R^{\star i}_{c^{\star \prime}}$ for the more fine-grained possible world we obtain from $w$ by adding that $R$ updates to $c'$ and $R^\star$ updates to $c^{\star\prime}$ upon receipt of $E_i$. And let
  • $EU(\langle c, R \rangle, w\ \&\ R^i_{c'}\ \&\ R^{\star i}_{c^{\star \prime}} ) = EU(c, w) + EU(c', w)$
  • $EU(\langle c^\star, R^\star \rangle, w\ \&\ R^i_{c'}\ \&\ R^{\star i}_{c^{\star \prime}} ) = EU(c^\star, w) + EU(c^{\star\prime}, w)$
Then:
Theorem 12  Suppose $EU$ is an additive, strictly proper epistemic utility function. Then:
  • If $\langle c, R \rangle$ is a pseudo-conditionalizing pair, there is no alternative pair $\langle c^\star, R^\star\rangle$ such that, for all $E_i$ in $\mathcal{E}$, $w$ in $E_i$, $c'$ in $C_i$ and $c^{\star\prime}$ in $C^\star_i$, $$EU(\langle c, R \rangle, w\ \&\ R^i_{c'}\ \&\ R^{\star i}_{c^{\star \prime}} ) < EU(\langle c^\star, R^\star \rangle, w\ \&\ R^i_{c'}\ \&\ R^{\star i}_{c^{\star \prime}})$$
  • There are pairs $\langle c, R \rangle$ that are non-conditionalizing and non-pseudo-conditionalizing for which there is no alternative pair $\langle c^\star, R^\star\rangle$ such that, for all $E_i$ in $\mathcal{E}$, $w$ in $E_i$, $c'$ in $C_i$ and $c^{\star\prime}$ in $C^\star_i$, $$EU(\langle c, R \rangle, w\ \&\ R^i_{c'}\ \&\ R^{\star i}_{c^{\star \prime}} ) < EU(\langle c^\star, R^\star \rangle, w\ \&\ R^i_{c'}\ \&\ R^{\star i}_{c^{\star \prime}})$$
Interpreted in this way, then, and without the assumption of Deterministic Updating, EUDA is the weakest of all the arguments. Where DSA at least establishes that your updating rule should be pseudo-conditionalizing for your prior, even if it does not establish that it should be conditionalizing, EUDA does not establish even that.

Conclusion


One upshot of this investigation is that, so long as we assume Deterministic Updating (DU), all four arguments support the same conclusions, namely, Plan and Dispositional Conditionalization, and also Actual Conditionalization. But once we drop DU, that agreement vanishes.

Without DU, DSA shows only that, if we plan to update using a particular rule, it should be a pseudo-conditionalizing rule for our prior; and similarly for our dispositions. As a result, it cannot support AC. Indeed, it can support only the weakest restrictions on our actual updating behaviour, since nearly any such behaviour can be seen as an implementation of a pseudo-conditionalizing rule.

EPUA and EEUA are much more hopeful. Let's consider our updating dispositions first. It seems natural to assume that, even if these are not deterministic, they are at least governed by some objective chances. If so, this gives a natural definition of the pragmatic and epistemic utilities of my updating dispositions at a world---they are expectations of the pragmatic and epistemic utilities of the posteriors, calculated using the objective chances. And, relative to that, we can in fact establish DU---we no longer need to assume it. With that in hand, we regain DC and two of the routes to AC.

Next, let's consider the updating plans we endorse. It also seems natural to assume that those plans, if not deterministic, might not be stochastic either. And, if that's the case, we can take their pragmatic or epistemic utility at a world to be the average pragmatic or epistemic utility of the different possible credence functions they endorse as responses to the evidence you gain at that world. And, relative to that, we can again establish DU. And with it PC and two of the routes to AC.

Finally, EUDA is a mixed bag. Understanding the epistemic and pragmatic utility of an updating rule as we have just described gives us DU and with it PC, DC, and AC. But if we take a fine-grained approach, we cannot even establish that your updating rule should be a pseudo-conditionalizing rule for your prior.

Proofs

For proofs of the theorems in this post, please see the paper version here.

Tuesday, 5 March 2019

Dutch Books, Money Pumps, and 'By Their Fruits' Reasoning

There is a species of reasoning deployed in some of the central arguments of formal epistemology and decision theory that we might call 'by their fruits' reasoning. It seeks to establish certain norms of rationality that govern our mental states by showing that, if your mental states fail to satisfy those norms, they lead you to make choices that have some undesirable feature. Thus, just as we might know false prophets by their behaviour, and corrupt trees by their evil fruit, so can we know that certain attitudes are irrational by looking not to them directly but to their consequences. For instance, the Dutch Book argument seeks to establish the norm of Probabilism for credences, which says that your credences should satisfy the axioms of the probability calculus. And it does this by showing that, if your credences do not satisfy those axioms, they will lead you to enter into a series of bets that, taken together, lose you money for sure (Ramsey 1931, de Finetti 1937). The Money Pump argument seeks to establish, among other norms, the norm of Transitivity for preferences, which says that if you prefer one option to another and that other to a third, you should prefer the first option to the third. And it does this by showing that, if your preferences are not transitive, they will lead you, again, to make a series of choices that loses you money for sure (Davidson, et al. 1955). Both of these arguments use 'by their fruits' reasoning. In this paper, I will argue that such arguments fail. I will focus particularly on the Dutch Book argument so that I can illustrate the points with examples. But the objections I raise apply equally to Money Pump arguments.

The Dutch Book argument: an example


Joachim is more confident that Sarah is an astrophysicist and a climate activist (proposition $A\ \&\ B$) than he is that she is an astrophysicist (proposition $A$). He is 60% confident in $A\ \&\ B$ and only 30% confident in $A$. But $A\ \&\ B$ entails $A$. So, intuitively, Joachim's credences are irrational.

How can we establish this? According to the Dutch Book argument, we look to the choices that Joachim's credences will lead him to make. The first premise of that argument posits a connection between credences and betting behaviour. Suppose $X$ is a proposition and $S$ is a number, positive, negative, or zero. Then a £$S$ bet on $X$ is a bet that pays £$S$ if $X$ is true and £$0$ if $X$ is false. £$S$ is the stake of the bet. The first premise of the Dutch Book argument says that, if you have credence $p$ in $X$, you will buy a £$S$ bet on $X$ for anything less than £$pS$. That is, the more confident you are in a proposition, the greater the proportion of the stake you are prepared to pay for a bet on it. Thus, in particular:
  • Bet 1: Joachim will buy a £$100$ bet on $A\ \&\ B$ for £$50$;
  • Bet 2: Joachim will sell a £$100$ bet on $A$ for £$40$.
The total net gain of these bets, taken together, is guaranteed to be negative. Thus, his credences will lead him to perform a pair of actions that, taken together, loses him money for sure. This is the second premise of the Dutch Book argument against Joachim. We say that this pair of actions (buy the first bet for £$50$; sell the second for £$40$) is dominated by the pair of actions in which he refuses both bets (refuse the first bet; refuse the second). The latter pair is guaranteed to result in greater total value than the former: the latter results in no loss and no gain, while the former results in a loss for sure. The third premise of the Dutch Book argument contends that, since it is undesirable to choose a dominated pair of options, it is irrational to have credences that lead you to do this. Ye shall know them by their fruits.
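To see why the loss is guaranteed, it may help to set the arithmetic out explicitly (this is just a check on the prices above). Joachim pays £$50$ for Bet 1 and receives £$40$ for Bet 2, so he is £$10$ down before any bets pay out. Then:
  • if $A\ \&\ B$ is true, Bet 1 pays him £$100$ and he pays out £$100$ on Bet 2: a total of $-50 + 100 + 40 - 100 = -10$;
  • if $A$ is true and $B$ is false, Bet 1 pays nothing and he pays out £$100$ on Bet 2: a total of $-50 + 0 + 40 - 100 = -110$;
  • if $A$ is false, neither bet pays out: a total of $-50 + 0 + 40 - 0 = -10$.
So, however the world turns out, he loses at least £$10$.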

Thus, a Dutch Book argument has three premises. The first premise posits a connection between having a particular credence in a proposition and accepting certain bets on that proposition. The second is a mathematical theorem which shows that, if the first premise is true, and if your credences do not satisfy the probability axioms, they will lead you to make a series of choices that is dominated by some alternative series of choices you might have made instead. The third premise says that your credences are irrational if, together with the connection posited in the first premise, they lead you to choose a dominated series of options. My objection is this: there is no account of the connection between credences and betting behaviour that makes both the first and third premises plausible; those accounts strong enough to make the third premise plausible are too strong to make the first premise plausible. Our strategy will be to enumerate the putative accounts of that connection and show that, on each, either the first or the third premise is false.

Let $C(p, X)$ be the proposition that you have credence $p$ in proposition $X$;  and let $B(x, S, X)$ be the proposition that you pay £$x$ for a £$S$ bet on $X$. Then the first premise of the Dutch Book argument has the following form:

For all credences $p$, propositions $X$, prices $x$, and stakes $S$, if $x < pS$, then $$O(C(p, X) \rightarrow B(x, S, X))$$
where $O$ is a modal operator. But which modal operator? Different answers to this constitute different versions of the connection between credences and betting behaviour that appears in the first and third premises of the Dutch Book argument. We will consider six different candidate operators and argue that none makes the first and third premises both true. The six candidates are: metaphysical necessity; nomological necessity; nomological high probability; deontic necessity; deontic possibility; and the modality of defeasible reasons.

$O$ is metaphysical necessity


We begin with metaphysical modality. According to this account, the first premise of the Dutch Book argument says that it is metaphysically impossible to have a credence of $p$ in $X$ while refusing to pay £$x$ for a £$S$ bet on $X$ (for $x < pS$). If you were to refuse such a bet, that would simply mean that you do not have that credence. This sort of account would be appealing to a behaviourist, who seeks an operational definition of what it means to have a particular precise credence in a proposition---a definition in terms of betting behaviour might well satisfy them.

If this account were true, the third premise of the Dutch Book argument would be plausible. If having a set of mental states were to guarantee as a matter of metaphysical necessity that you'd make a dominated series of choices when faced with a particular series of decisions, that seems sufficient to show that those mental states are irrational. The problem is that, as David Christensen (1996) shows, the account itself cannot be true. Christensen's central point is this: credences are often and perhaps typically connected to betting behaviour and decision-making more generally; but they are often and perhaps typically connected to other things as well, such as emotional states, conative states, and other doxastic states. If I have a high credence that my partner loves me, I'm likely to pay a high proportion of the stake for a bet on it; but I'm also likely to feel joy, plan to spend more time with him, hope that his love continues, and believe that we will still be together in five years' time. What's more, none of these connections is obviously more important than any other in determining that a mental state is a credence. And each might fail while the others hold. Indeed, as Christensen notes, in Dutch Book arguments, we are concerned precisely with those cases in which there is a breakdown of the rationally required connections between credences, namely, the connections described by the probability axioms. Having a credence in one proposition usually leads you to have at least as high a credence in another proposition it entails. But, as we saw in Joachim's case, this connection can break down. So, just as Joachim's case shows that it is metaphysically possible to have a particular credence that has all the other connections that we typically associate with it except the connection to other credences, so it must be at least metaphysically possible to have a credence that has all the other connections that we associate with it but not the connection to betting behaviour posited by the first premise. Such a mental state would still count as the credence in question because of all the other connections; but it wouldn't give rise to the apparently characteristic betting behaviour that is required to run the Dutch Book argument. Moreover, note that we need not assume that the credence has none of the usual connections to betting behaviour. Consider Joachim again. Every Dutch Book to which he is vulnerable involves him buying a bet on $A\ \&\ B$ and selling a bet on $A$. That is, it involves him buying a bet on $A\ \&\ B$ with a positive stake and buying a bet on $A$ with a negative stake. So he would evade the argument if his credence in $A\ \&\ B$ were to lead him to buy all the bets, with any stake, that the first premise says it will, while his credence in $A$ were to lead him to buy only the bets with positive stakes that the first premise says it will. In this case, we'd surely say he has the credences we assign to him. But he would not be vulnerable to a Dutch Book argument.

Thus, if $O$ is metaphysical necessity, the third premise might well be true; but the first premise is false.

$O$ is nomological necessity


Learning from the problems with the previous proposal, we might retreat to a weaker modality. For instance, we might suggest that $O$ is a nomological modality. There are two that it might be. We might say that the connection between credences and betting behaviour posited by the first premise is nomologically necessary---that is, it is entailed by the laws of nature. Or, we might say that it is nomologically highly probable---that is, the objective chance of the consequent given the antecedent is high. Let's take them in turn.

First, $O$ is nomological necessity. The problem with this is the same as the problem with the suggestion from the previous section that $O$ is metaphysical necessity. Above, we imagined a mental state that had all the other features we'd typically expect of a particular credence in a proposition, except some range of connections to betting behaviour that was crucial for the Dutch Book argument. We noted that this would still count as the credence in question. All that needs to be added here is that the example we considered is not only metaphysically possible, but also nomologically possible. That is, this is not akin to an example in which the fine structure constant is different from what it actually is---in that case, it would be metaphysically possible, but nomologically impossible. There is no law of nature that entails that your credence will lead to particular betting behaviour.

Thus, again, the first premise is false.

$O$ is nomological high probability


Nonetheless, while it is not guaranteed by the laws of nature that an individual with a particular credence in a proposition will engage in the betting behaviour posited by the first premise, it does seem plausible that they are very likely to do so---that is, the objective chance that they will display the behaviour given that they have the credence is high. In other words, while weakening from metaphysical to nomological necessity doesn't make the first premise plausible, weakening further to nomological high probability does. So let's suppose, then, that $O$ is nomological high probability. Unfortunately, this causes two problems for the third premise.

Here's the first. Suppose I have credences in 1,000 mutually exclusive and exhaustive propositions. And suppose each credence is $\frac{1}{1,001}$. So they violate Probabilism. Suppose further that each credence is 99% likely to give rise to the betting behaviour mentioned in the first premise of the Dutch Book argument; and suppose that whether one of the credences does or not is independent of whether any of the others does or not. Then the objective chance that the set of 1,000 credences will give rise to the betting behaviour that will lose me money for sure is $0.99^{1,000} \approx 0.00004 \approx \frac{1}{23,163}$. And this tells against the third premise. After all, what is so irrational about a set of credences that will lead to a dominated series of choices less than once in every 20,000 times I face the bets described in the Dutch Book argument against me?
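For those who wish to check the arithmetic behind that figure: $$0.99^{1,000} = e^{1,000 \ln 0.99} \approx e^{-10.05} \approx 0.0000432 \approx \frac{1}{23,163}$$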

Here's the second problem. On the account we are considering, having a particular credence in a proposition makes it highly likely that you'll bet in a particular way. Let's say, then, that you violate Probabilism, and your credences do indeed result in you making a dominated series of choices. The third premise infers from this that your credences are irrational. But why lay the blame at the credences' door? After all, there is another possible culprit, namely, the probabilistic connection between the credence and the betting behaviour. Consider an analogy. Suppose that, as the result of some bizarre causal pathway, when I fall in love, it is very likely that I will feed myself a diet of nothing but mud and leaves for a week. I hate the taste of the mud and the leaves make me very sick, and so I lower my utility considerably by responding in this way. But I do it anyway. In this case, we would not, I think, say that it is irrational to fall in love. Rather, we'd say that what is irrational is my response to falling in love. Similarly, suppose I make a dominated series of choices and thus reveal some irrationality in myself. Then, for all the Dutch Book argument says, it might be that the irrationality lies not in the credences, but rather in my response to having those credences.

Thus, on this account, the first premise is plausible, but the third premise is unmotivated, for it imputes irrationality to my credences when it might instead lie in my response to having those credences.

$O$ is deontic necessity


A natural response to the argument of the previous section is that the analogy between the credence-betting connection and the love-diet connection fails because the first is a rationally appropriate connection, while the second is not. This leads us to suggest, along with Christensen (1996), that the connection between credences and betting behaviour at the heart of the Dutch Book argument is not governed by a descriptive modality, such as metaphysical or nomological modality, but rather by a prescriptive modality, such as deontic modality. In particular, it suggests that what the first premise says is not that someone with a particular credence in a proposition will or might or probably will accept certain bets on that proposition, but rather that they should or may or have good but defeasible reason to do so.

Let's begin with deontic necessity. Here, my objection is that, if this is the modality at play in the first and third premises, then the argument is self-defeating. To see why, consider Joachim again. Suppose the modality is deontic necessity, and suppose that the first premise is true. So Joachim is rationally required to make a dominated series of choices---buy the £$100$ bet on $A\ \&\ B$ for £$50$; sell the £$100$ bet on $A$ for £$40$. Now suppose further that the third premise is true as well---it does, after all, seem plausible on this account of the modality involved. Then we conclude that Joachim's credences are irrational. But surely it is not rationally required to choose in line with irrational credences. Surely what is rationally required of Joachim instead is that he should correct his irrational credences so that they are now rational, and he should then choose in line with his new rational credences. Now, whatever other features they have, his new rational credences must obey Probabilism. If not, they will be vulnerable to the Dutch Book argument and thus irrational. But the Converse Dutch Book Theorem shows that, if they obey Probabilism, they will not rationally require or even permit Joachim to make a dominated series of choices. And, in particular, they neither require nor permit him to accept both of the bets described in the original argument. But from this we can conclude that the first premise is false. Joachim's original credences do not rationally require him to accept both of the bets; instead, rationality requires him to fix up those credences and choose in line with the credences that result. But those new fixed-up credences do not require what the first premise says they require. Indeed, they don't even permit what the first premise says they require. So, if the premises of the Dutch Book argument are true, Joachim's credences are irrational, and thus the first premise of the argument is false.

Thus, on this account, the Dutch Book argument is self-defeating: if it succeeds, its first premise is false.

$O$ is deontic possibility


A similar problem arises if we take the modality to be deontic possibility, rather than necessity. On this account, the first premise says not that Joachim is required to make each of the choices in the dominated series of choices, but rather that he is permitted to do so. The third premise must then judge a person irrational if they are permitted to accept each choice in a dominated series of choices. If we grant that, we can conclude that Joachim's credences are irrational. And again, we note that rationality therefore requires him to fix up those credences first and then to choose in line with the fixed up credences. But just as those fixed up credences don't require him to make each of the choices in the dominated series, so they don't permit him to make them either. So the Dutch Book argument, if successful, undermines its first premise again.

Again, the Dutch Book argument is self-defeating.

$O$ is the modality of defeasible reasons


The final possibility we will consider is this: Joachim's credences neither rationally require nor rationally permit him to make each of the choices in the dominated series; but perhaps we might say that each credence gives him a pro tanto or defeasible reason to accept the corresponding bet. That is, we might say that Joachim's credence of 60% in $A\ \&\ B$ gives him a pro tanto or defeasible reason to buy a £$100$ bet on $A\ \&\ B$ for £$50$, while his credence of 30% in $A$ gives him a pro tanto or defeasible reason to sell a £$100$ bet on $A$ for £$40$. As we saw above, those reasons must be defeasible, since they will be defeated by the fact that Joachim's credences, taken together, are irrational. Since they are irrational, he has stronger reason to fix up those credences and choose in line with the fixed-up ones than he has to choose in line with his original credences. But his original credences nonetheless still provide some reason in favour of accepting the bets.*

Rendered thus, I think the first premise is quite plausible. The problem is that the third premise is not. It must say that it is irrational to have any set of mental states where (i) each state in the set gives pro tanto reason to make a particular choice and (ii) taken together, that series of choices is dominated by another series of choices. But that is surely false. Suppose I believe this car in front of me is two years old and I also believe it's done 200,000 miles. The first belief gives me pro tanto or defeasible reason to pay £$5,000$ for it. The second gives me pro tanto reason to sell it for £$500$ as soon as I own it. Doing both of these things will lose me £$4,500$ for sure. But there is nothing irrational about my two beliefs. The problem arises only if I make decisions in line with the reasons given by just one of the beliefs, rather than taking into account my whole doxastic state. If I were to attend to my whole doxastic state, I'd never pay £$5,000$ for the car in the first place.  And the same might be said of Joachim. If he pays attention only to the reasons given by his credence in $A\ \&\ B$ when he considers the bet on that proposition, and pays attention only to the reasons given by his credence in $A$ when he considers the bet on that proposition, he will choose a dominated series of options. But if he looks to the whole credal state, and if the Dutch Book argument succeeds, he will see that its irrationality defeats those reasons and gives him stronger reason to fix up his credences and act in line with those. In sum, there is nothing irrational about a set of mental states each of which individually gives you pro tanto or defeasible reason to choose an option in a dominated series of options.

On this account, the first premise may be true, but the third is false.

Conclusion


In conclusion, there is no account of the modality involved in the first and third premises of the Dutch Book argument that can make both premises true. Metaphysical and nomological necessity are too strong to make the first premise true. Nomological high probability is not, but it does not make the third premise true. Deontic necessity and possibility render the argument self-defeating, for if the argument succeeds, the first premise must be false. Finally, the modality of defeasible reasons, like nomological high probability, renders the first premise plausible; but it is not sufficient to secure the third premise.

Before we conclude, let's consider briefly how these considerations affect money pump arguments. The first premise of a money pump argument does not posit a connection between credences and betting behaviour, but between preferences and betting behaviour. In particular: if I prefer one option to another, there will be some small amount of money I'll be prepared to pay to receive the first option rather than the second. As with the Dutch Book argument, the question arises what the modal force of this connection is. And indeed the same candidates are available. What's more, the same considerations tell against each of those candidates. Just as credences are typically connected not only to betting behaviour but also to emotional states, intentional states, and other doxastic states, so preferences are typically connected to emotional states, intentional states, and other preferences. If I prefer one option to another, then this might typically lead me to pay a little to receive the first rather than the second; but it will also typically lead me to hope that I will receive the first rather than the second, to fear that I'll receive the second, to intend to choose the first over the second when faced with such a choice, and to have a further preference for the first and a small loss of money over the second. And again the connections to behaviour are no more central to this preference than the connections to the emotional states of hope and fear, the intentions to choose, and the other preferences. So the modal force of the connection posited by the first premise cannot be metaphysical or nomological necessity. And for the same reasons as above, it cannot be nomological high probability, deontic necessity or possibility, or the modality of defeasible reasons. In each case, the same objections hold.

So these two central instances of 'by their fruits' reasoning fail. We cannot give an account of the connection between the mental states and their evil fruit that renders the argument successful.

[* Thanks to Jason Konek for pushing me to consider this account.]

References


  • Christensen, D. (1996). Dutch-Book Arguments Depragmatized: Epistemic Consistency for Partial Believers. The Journal of Philosophy, 93(9), 450–479. 
  • Davidson, D., McKinsey, J. C. C., & Suppes, P. (1955). Outlines of a Formal Theory of Value, I. Philosophy of Science, 22(2), 140–160.
  • de Finetti, B. (1937). Foresight: Its Logical Laws, Its Subjective Sources. In H. E. Kyburg & H. E. Smokler (Eds.), Studies in Subjective Probability. Huntington, NY: Robert E. Krieger Publishing Co.
  • Ramsey, F. P. (1931). Truth and Probability. The Foundations of Mathematics and Other Logical Essays, (pp. 156–198).

Saturday, 2 February 2019

Credences in vague propositions: supervaluationist semantics and Dempster-Shafer belief functions

Safet is considering the proposition $R$, which says that the handkerchief in his pocket is red. Now, suppose we take red to be a vague concept. And suppose we favour a supervaluationist semantics for propositions that involve vague concepts. According to such a semantics, there is a set of legitimate precisifications of the concept red, and a proposition that involves that concept is true if it is true relative to all legitimate precisifications, false if false relative to all legitimate precisifications, and neither if true relative to some and false relative to others. So 'London buses are red' is true, 'Daffodils are red' is false, and 'Cherry blossom is red' is neither.

Safet is assigning a credence to $R$ and a credence to its negation $\overline{R}$. He assigns 20% to $R$ and 20% to $\overline{R}$. Normally, we'd say that he is irrational, since his credences in mutually exclusive and exhaustive propositions don't sum to 100%. What's more, we'd demonstrate his irrationality using either

(i) a sure loss betting argument, which shows there is a finite series of bets, each of which his credences require him to accept but which, taken together, are guaranteed to lose him money; or

(ii) an accuracy argument, which shows that there are alternative credences in those two propositions that are guaranteed to be closer to the ideal credences.

However, in Safet's case, both arguments fail.

Take the sure loss betting argument first. According to that, Safet's credences require him to sell a £100 bet on $R$ for £30 and to sell a £100 bet on $\overline{R}$ for £30. Thus, he will receive £60 from the sale of these two bets. Usually the argument proceeds by noting that, however the world turns out, either $R$ is true or $\overline{R}$ is true. So he will have to pay out £100 regardless, and he is therefore guaranteed to lose £40 overall. But, on a supervaluationist semantics, this assumption isn't true. If Safet's handkerchief is a sort of pinkish colour, $R$ will be neither true nor false, and $\overline{R}$ will be neither true nor false. So he won't have to pay out on either bet, and he'll gain £60 overall.
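Setting out the ledger case by case makes the failure vivid. Safet receives £60 up front, and then:
  • if the handkerchief is clearly red, $R$ is true: he pays out £100 on the first bet, for a net of $60 - 100 = -40$;
  • if it is clearly not red, $\overline{R}$ is true: he pays out £100 on the second bet, again for a net of $-40$;
  • if it is pinkish, neither is true: he pays out nothing, for a net of $+60$.
On a classical semantics, the third case cannot arise and the loss is sure; on a supervaluationist semantics, it can.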

Next, take the accuracy argument. According to that, a person's credences are more accurate the closer they lie to the ideal credences; and the ideal credence in a true proposition is 100% while the ideal credence in a proposition that isn't true is 0%. Given that the measure of distance between credence functions has a particular property, we usually show that there are alternative credences in $R$ and $\overline{R}$ that are closer to each set of ideal credences than Safet's are. For instance, if we measure the distance between two credence functions $c$ and $c'$ using the so-called squared Euclidean distance, so that $$SED(c, c') = (c(R) - c'(R))^2 + (c(\overline{R}) - c'(\overline{R}))^2$$ then credences of 50% in both $R$ and $\overline{R}$ are guaranteed to be closer than Safet's to the credences of 100% in $R$ and 0% in $\overline{R}$, which are ideal if $R$ is true, and closer than Safet's to the credences of 0% in $R$ and 100% in $\overline{R}$, which are ideal if $\overline{R}$ is true. Now, if $R$ is a classical proposition, then this covers all the bases---either $R$ is true or $\overline{R}$ is. But since $R$ has a supervaluationist semantics, there is a further possibility. After all, if Safet's handkerchief is a sort of pinkish colour, $R$ will be neither true nor false, and $\overline{R}$ will be neither true nor false. So the ideal credences will be 0% in $R$ and 0% in $\overline{R}$. And 50% in $R$ and 50% in $\overline{R}$ is not closer than Safet's to those credences. Indeed, Safet's are closer.
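It is worth running the numbers (a quick check, using $SED$ as defined above). Safet's credences are $(0.2, 0.2)$ in $(R, \overline{R})$; the proposed alternative is $(0.5, 0.5)$:
  • if $R$ is true, the ideal credences are $(1, 0)$: the distance from Safet's is $(0.2 - 1)^2 + (0.2 - 0)^2 = 0.68$, while the distance from $(0.5, 0.5)$ is $0.25 + 0.25 = 0.5$;
  • if $\overline{R}$ is true, the ideal credences are $(0, 1)$: by symmetry, the distances are again $0.68$ and $0.5$;
  • if neither is true, the ideal credences are $(0, 0)$: the distance from Safet's is $0.04 + 0.04 = 0.08$, while the distance from $(0.5, 0.5)$ is $0.5$.
The alternative beats Safet's credences in the first two cases, but Safet's beat it in the third. So there is no accuracy dominance.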

So our usual arguments that try to demonstrate that Safet is irrational fail. So what happens next? The answer was given by Jeff Paris ('A Note on the Dutch Book Method'). He argued that the correct norm for Safet is not Probabilism, which requires that his credence function is a probability function, and therefore declares him irrational. Instead, it is Dempster-Shaferism, which requires that his credence function is a Dempster-Shafer belief function, and therefore declares him rational. To establish this, Paris showed how to tweak the standard sure loss betting argument for Probabilism, which depends on a background logic that is classical, to give a sure loss betting argument for Dempster-Shaferism, which depends on a background logic that comes from the supervaluationist semantics. To do this, he borrowed an insight from Jean-Yves Jaffray ('Coherent bets under partially resolving uncertainty and belief functions'). Robbie Williams then appealed to Jaffray's theorem to tweak the accuracy argument for Probabilism to give an accuracy argument for Dempster-Shaferism ('Generalized Probabilism: Dutch Books and Accuracy Domination'). However, Jaffray's result doesn't explicitly mention supervaluationist semantics. And neither Paris nor Williams fills in the missing details. So I thought it might be helpful to lay out those details here.

I'll start by sketching the argument. Then I'll go into the mathematical detail. So first, the law of credences that we'll be justifying. We begin with a definition. Throughout we'll consider only credence functions on a finite Boolean algebra $\mathcal{F}$. We'll represent the propositions in $\mathcal{F}$ as subsets of a set of possible worlds.

Definition (belief function) Suppose $c : \mathcal{F} \rightarrow [0, 1]$. Then $c$ is a Dempster-Shafer belief function if
  • (DS1a) $c(\bot) = 0$
  • (DS1b) $c(\top) = 1$
  • (DS2) For any proposition $A$ in $\mathcal{F}$,$$c(A) \geq \sum_{B \subsetneqq A} (-1)^{|A-B|+1}c(B)$$
Then we state the law:

Dempster-Shaferism $c$ should be a D-S belief function.
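To get a feel for (DS2), consider its simplest non-trivial instance (an unpacking of the definition, not an addition to it). If $A = \{w_1, w_2\}$, its proper subsets are $\bot$, $\{w_1\}$, and $\{w_2\}$, and (DS2) says $$c(A) \geq c(\{w_1\}) + c(\{w_2\}) - c(\bot) = c(\{w_1\}) + c(\{w_2\})$$
So, where Probabilism demands that credences in disjoint propositions add up exactly, a belief function need only be superadditive. That is precisely the slack that Safet's credences exploit: $c(R) + c(\overline{R}) = 0.4 < 1 = c(R \vee \overline{R})$.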

Now, suppose $Q$ is a set of legitimate precisifications of the concepts that are involved in the propositions in $\mathcal{F}$. Essentially, $Q$ is a set of functions each of which takes a possible world and returns a classically consistent assignment of truth values to the propositions in $\mathcal{F}$. Given a possible world $w$, let $A_w$ be the strongest proposition that is true at $w$ on all legitimate precisifications in $Q$. If $A = A_w$ for some world $w$, we say that $A$ is a state description for $w$.

Definition (belief function$^*$) Suppose $c : \mathcal{F} \rightarrow [0, 1]$. Then $c$ is a Dempster-Shafer belief function$^*$ relative to a set of precisifications if $c$ is a Dempster-Shafer belief function and
  • (DS3) For any proposition $A$ in $\mathcal{F}$ that is not a state description for any world, $$c(A) = \sum_{B \subsetneqq A} (-1)^{|A-B|+1}c(B)$$
Dempster-Shaferism$^*$ $c$ should be a Dempster-Shafer belief function$^*$.

It turns out that Dempster-Shaferism$^*$ is the strongest credal norm that we can justify using sure loss betting arguments and accuracy arguments. The sure loss betting argument is based on the following assumption: a £$S$ bet on a proposition $A$ pays out £$S$ if $A$ is true and £0 otherwise; and if your credence in $A$ is $p$, you are required to pay anything less than £$pS$ for a £$S$ bet on $A$. With that in hand, we can show that you are immune to a sure loss betting argument iff your credence function is a Dempster-Shafer belief function$^*$. That is, if your credence function violates Dempster-Shaferism$^*$, then there is a finite set of bets on propositions in $\mathcal{F}$ such that (i) your credences require you to accept each of them, and (ii) together, they lose you money in all possible worlds. If your credence function satisfies Dempster-Shaferism$^*$, there is no such set of bets.

The accuracy argument is based on the following assumption: The ideal credence in a proposition at a world is 1 if that proposition is true at the world, and 0 otherwise; and the distance from one credence function to another is measured by a particular sort of measure called a Bregman divergence. With that in hand, we can show that you are immune to an accuracy dominance argument iff your credence function is a Dempster-Shafer belief function$^*$. That is, if your credence function violates Dempster-Shaferism$^*$, then there is an alternative credence function that is closer to the ideal credence function than yours at every possible world. If your credence function satisfies Dempster-Shaferism$^*$, there is no such alternative.

So much for the sketch of the arguments. Now for some more details. Suppose $c : \mathcal{F} \rightarrow [0, 1]$ is a credence function defined on the set of propositions $\mathcal{F}$. Often, we don't have to assume anything about $\mathcal{F}$, but in the case we're considering here, we must assume that it is a finite Boolean algebra. In both sure loss arguments and accuracy arguments, we need to define a set of functions, one for each possible world. In the sure loss arguments, these specify when certain bets pay out; in the accuracy arguments, they specify the ideal credences. In the classical case and in the supervaluationist case that we consider here, they coincide. Given a possible world $w$, we abuse notation and write $w : \mathcal{F} \rightarrow \{0, 1\}$ for the following function:
  • $w(A) = 1$ if $A$ is true at $w$---that is, if $A$ is true on all legitimate precisifications at $w$;
  • $w(A) = 0$ if $A$ is not true at $w$---that is, if $A$ is false on some (possibly all) legitimate precisifications at $w$.
Then, given our assumptions, we have that a £$S$ bet on $A$ pays out £$Sw(A)$ at $w$; and we have that $w(A)$ is the ideal credence in $A$ at $w$. Now, let $\mathcal{W}$ be the set of these functions. And let $\mathcal{W}^+$ be the convex hull of $\mathcal{W}$. That is, $\mathcal{W}^+$ is the smallest convex set that contains $\mathcal{W}$. In other words, $\mathcal{W}^+$ is the set of convex combinations of the functions in $\mathcal{W}$. There is then a general result that says that $c$ is vulnerable to a sure loss betting argument iff $c$ is not in $\mathcal{W}^+$. And another general result that says that $c$ is accuracy dominated iff $c$ is not in $\mathcal{W}^+$. To complete our argument, therefore, we must show that $\mathcal{W}^+$ is precisely the set of Dempster-Shafer belief functions$^*$. That's the central purpose of this post. And that's what we turn to now.
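Before the general argument, it may help to see what $\mathcal{W}^+$ looks like in Safet's case (an illustration, restricting attention to the coordinates for $R$ and $\overline{R}$). There are three relevant kinds of world: a clearly-red world gives $(w(R), w(\overline{R})) = (1, 0)$; a clearly-not-red world gives $(0, 1)$; and a pinkish world gives $(0, 0)$, since neither proposition is true there on all precisifications. The convex hull of these three points is the set of pairs $(x, y)$ with $x, y \geq 0$ and $x + y \leq 1$, and Safet's credences $(0.2, 0.2)$ lie inside it. That is why both arguments leave him untouched. In the classical case, the $(0, 0)$ vertex is missing, and the hull collapses to the line $x + y = 1$, which, for these two propositions, is just Probabilism.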

We start with some definitions that allow us to give an alternative characterization of the Dempster-Shafer belief functions and belief functions$^*$.

Definition (mass function) Suppose $m : \mathcal{F} \rightarrow [0, 1]$. Then $m$ is a mass function if
  • (M1) $m(\bot) = 0$
  • (M2) $\sum_{A \in \mathcal{F}} m(A) = 1$
Definition (mass function$^*$) Suppose $m : \mathcal{F} \rightarrow [0, 1]$. Then $m$ is a mass function$^*$ relative to a set of precisifications if $m$ is a mass function and
  • (M3) For any proposition $A$ in $\mathcal{F}$ that is not the state description of any world, $m(A) = 0$.
Definition ($m$ generates $c$) If $m$ is a mass function and $c$ is a credence function, we say that $m$ generates $c$ if, for all $A$ in $\mathcal{F}$, $$c(A) = \sum_{B \subseteq A} m(B)$$ That is, a mass function generates a credence function iff the credence assigned to a proposition is the sum of the masses assigned to the propositions that entail it.
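To make the generating operation concrete, here is a minimal sketch in Python (my own illustration, not drawn from Paris, Jaffray, or Williams), using a toy model of Safet's situation: worlds $w_1$ (clearly red), $w_2$ (clearly not red), and $w_3$ (pinkish), on the natural assumption that the tautology is the state description of the pinkish world.

```python
# Toy model: propositions are sets of worlds; the mass function puts weight
# on the three state descriptions: {w1}, {w2}, and the tautology (which, on
# our assumption, is the state description of the borderline world w3).
mass = {
    frozenset({'w1'}): 0.2,               # state description of the clearly-red world
    frozenset({'w2'}): 0.2,               # state description of the clearly-not-red world
    frozenset({'w1', 'w2', 'w3'}): 0.6,   # the tautology
}

def belief(A):
    # c(A) is the sum of the masses of the propositions that entail A,
    # i.e. the propositions B with B a subset of A.
    return sum(m for B, m in mass.items() if B <= A)

R     = frozenset({'w1'})        # 'the handkerchief is red'
not_R = frozenset({'w2', 'w3'})  # its negation

print(belief(R), belief(not_R))  # 0.2 0.2 -- Safet's credences
print(belief(R | not_R))         # 1.0 -- (DS1b)
```

Running this recovers Safet's credences of 20% in $R$ and 20% in $\overline{R}$, with the remaining mass sitting on the tautology; by Theorem 1 below, any credence function produced in this way is a Dempster-Shafer belief function.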

Theorem 1
  • $c$ is a Dempster-Shafer belief function iff there is a mass function $m$ that generates $c$.
  • $c$ is a Dempster-Shafer belief function$^*$ iff there is a mass function$^*$ $m$ that generates $c$.
Proof of Theorem 1 Suppose $m$ is a mass function that generates $c$. Then it is straightforward to verify that $c$ is a D-S belief function. Suppose $c$ is a D-S belief function. Then let $$m(A) = c(A) - \sum_{B \subsetneqq A} (-1)^{|A-B|+1}c(B)$$ This is non-negative, since $c$ is a belief function: that is exactly what (DS2) says. It is then straightforward to verify that $m$ is a mass function. And it is straightforward to see that $m(A) = 0$ iff $c(A) = \sum_{B \subsetneqq A} (-1)^{|A-B|+1}c(B)$, which connects (M3) with (DS3). That completes the proof.

Theorem 2 $c$ is in $\mathcal{W}^+$ iff $c$ is a Dempster-Shafer belief function$^*$.

Proof of Theorem 2 Suppose $c$ is in $\mathcal{W}^+$. So $c(-) = \sum_{w \in \mathcal{W}} \lambda_w w(-)$ for some non-negative weights $\lambda_w$ that sum to 1. Note that $w(A) = 1$ iff $A_w \subseteq A$, since $A_w$ is the strongest proposition true at $w$ on all legitimate precisifications. Then:
  • if $A$ is the state description of some world, let $m(A) = \sum_{w : A_w = A} \lambda_w$;
  • if $A$ is not a state description of any world, let $m(A) = 0$.
Then $m$ is a mass function$^*$. And $m$ generates $c$. So $c$ is a Dempster-Shafer belief function$^*$.

Suppose $c$ is a Dempster-Shafer belief function$^*$ generated by a mass function$^*$ $m$. For each proposition $B$ that is the state description of some world, pick one such world $w_B$ and let $\lambda_{w_B} = m(B)$; let $\lambda_w = 0$ for every other world. These weights are non-negative and, by (M2) and (M3), they sum to 1. Then $c(-) = \sum_{w \in \mathcal{W}} \lambda_w w(-)$. So $c$ is in $\mathcal{W}^+$.

This completes the proof. And with the proof we have the sure loss betting argument and the accuracy dominance argument for Dempster-Shaferism$^*$ when the propositions about which you have an opinion are governed by a supervaluationist semantics.