Arthur Paul Pedersen

"An Impossibility Result for any Theory of Non-Archimedean Subjective Expected Utility."

Theories of subjective expected utility standardly require their representing functions---probabilities, utilities, and expected utilities---to take real numbers as their values. To meet this requirement, such theories must impose so-called Archimedean conditions. Despite their technical indispensability to standard theories of subjective expected utility, Archimedean conditions are not sacrosanct. Numerous authors have argued that Archimedean conditions demand too much in the name of rationality, and many have gone a step further, arguing that rationality may even require violating Archimedean conditions in order to conform to purported norms that standard theories fail to enforce. In light of such considerations, various authors have developed theories of non-Archimedean probability and expected utility. These theories relax Archimedean conditions, thereby allowing probabilities, utilities, or expected utilities, as the case may be, to take values in one or another non-Archimedean ordered algebraic structure, such as a proper totally ordered field extension of the field of real numbers.

In this article I examine the prospects for a general theory of non-Archimedean subjective expected utility. What is desired---and what I argue formal developments in non-Archimedean probability and expected utility have hitherto failed to deliver---is a theory unhampered by normatively unmotivated technical restrictions, one that abandons Archimedean conditions without unduly conceding generality. While it may be tempting to think that the foundations for a general theory can be built upon the groundwork of past developments in non-Archimedean probability and expected utility in a straightforward way, I show that, in a very precise sense, this is not the case.

I introduce three simple conditions that a general theory of non-Archimedean subjective expected utility might be plausibly required to satisfy, and I show that under specific elementary hypotheses reflecting ideas that have been invoked to motivate non-Archimedean subjective probability and expected utility, it is impossible for these three conditions to be jointly satisfied. This no-go result is a corollary of a general impossibility theorem that I prove under very weak assumptions formulated in the context of abstract algebra. I explain why the impossibility result has significant formal and philosophical implications for a general theory of non-Archimedean subjective expected utility.

"Representing Conditional Expectations by Non-Archimedean Expectations."

I show that for any real-valued conditional expectation function, there is a strictly positive, and possibly non-Archimedean, expectation function that represents the conditional expectation function in the sense that the conditional expectations determined by these functions are the same up to a positive infinitesimal. Strict positivity requires the expectation function to assign a positive value to any random quantity in its domain that is state-wise non-negative but not state-wise identical to zero. Thus an expectation function is strictly positive just in case, whenever one random quantity in its domain is at least as great in every state as another, nonidentical random quantity in its domain, the expectation of the former is greater than the expectation of the latter. Strict positivity entails a well-known condition called regularity, which requires any event different from the impossible event to be assigned positive probability. An expectation function is non-Archimedean if the smallest totally ordered field to include its range is non-Archimedean and so includes positive infinitesimals, numbers smaller than every positive real number.

The representation theorem of this article extends a representation result for primitive conditional probabilities due to Peter Krauss (1968), later re-proved by Vann McGee (1994) and Joseph Halpern (2010). I argue that the representation theorem is too coarse-grained to capture the rich variety of systems of belief, value, and choice representable by non-Archimedean expectation functions. This coarse-grainedness notwithstanding, I argue that the representation theorem demonstrates that much of the formal apparatus of Bayesian decision theory and statistical inference can be preserved in a theory of non-Archimedean subjective expected utility.

"Strictly Coherent Preferences, No Holds Barred."

This article develops foundations for a general normative theory of subjective expected utility based on a system of axioms regulating preferences. The theory differs from standard theories of subjective expected utility in two significant ways. First, in addition to forgoing a number of regularity conditions (e.g., measurability conditions, topological restrictions, or subsidiary cardinality restrictions), no Archimedean conditions, not even weakened Archimedean conditions, are imposed on an agent's preferences. Second, given the assumption of act-state independence, an agent's preferences are required to respect the principle of weak dominance. This principle requires an agent to reject any potential option that, by the agent's own lights, is possibly worse and certainly no better than an alternative option available for the agent to choose in a given decision problem. An agent's preferences are said to be strictly coherent if they satisfy the axioms proposed in this article. I show that an agent's preferences are strictly coherent if and only if they accord with a ranking by a subjective expected utility function taking values in a totally ordered (possibly non-Archimedean) field. Rather than obtain this field by a routine ultraproduct construction, I prove an extension of the classical Hahn Embedding Theorem and employ it to construct the totally ordered field in which the expected utility function takes values. Numbers in the constructed field take a very simple form: formal power series in a single infinitesimal, with addition and multiplication defined by means of the familiar operations of addition and multiplication of power series, and a total ordering defined so that a non-zero power series is positive just in case its first non-zero coefficient (i.e., the coefficient of the least exponent with a non-zero coefficient) is positive---a lexicographic ordering.

The numerical representation theorem in terms of formal power series shows how, in general, (lexicographically) ordered vectorial representations (in the tradition of Hausner, Thrall, Fishburn, Blume et al., among others) may be understood instead quite simply as ordered field representations, with multiplication of vectors naturally defined by means of the familiar operation of multiplication of power series. Such a field representation sheds light not only upon how to multiply vectorial utilities and vectorial probabilities in agreement with traditional accounts in the expected utility paradigm but also upon how to define conditional lexicographic probability and expectation according to the standard ratio formula. The developments of this article thus address concerns raised by Halpern (2010), serving to close what may appear to be a gap between ordered vectorial representations and ordered field representations.
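The power-series arithmetic just described can be sketched concretely. The following is a minimal illustration, not the article's construction: truncated formal power series in a single infinitesimal, represented as coefficient lists, with the Cauchy product and the lexicographic positivity test. All names and the truncation order N are illustrative choices.

```python
# Minimal sketch (not the article's construction): truncated formal power
# series in a single infinitesimal eps, as coefficient lists [a0, a1, a2, ...]
# standing for a0 + a1*eps + a2*eps^2 + ...; N is an illustrative truncation.

N = 6

def add(p, q):
    """Coefficient-wise addition of power series."""
    return [a + b for a, b in zip(p, q)]

def mul(p, q):
    """Cauchy product of power series, truncated at order N."""
    r = [0.0] * N
    for i in range(N):
        for j in range(N - i):
            r[i + j] += p[i] * q[j]
    return r

def is_positive(p):
    """Lexicographic ordering: a non-zero series is positive just in case
    its first non-zero coefficient is positive."""
    for a in p:
        if a != 0:
            return a > 0
    return False  # the zero series is not positive

eps = [0.0, 1.0] + [0.0] * (N - 2)   # the infinitesimal itself
r = [0.001] + [0.0] * (N - 1)        # a small positive real
assert is_positive(eps)                          # eps > 0 ...
assert is_positive(add(r, [-a for a in eps]))    # ... yet eps < every positive real
assert mul(eps, eps)[2] == 1.0                   # eps * eps = eps^2
```

The point of the sketch is that the vectorial (lexicographic) comparison and the field multiplication live on the same representation, which is what allows ratios, and hence conditional expectations, to be formed.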

"Dilation, Disintegrations, Dominance Principles, and Delayed Decisions" (with Gregory Wheeler).

For numerically sharp, or precise, probabilities, non-conglomerability occurs when conditioning on the outcome of an experiment uniformly reduces the probability of an event---no matter what the experimental outcome is. Non-conglomerability is remarkably similar to another phenomenon that can occur for numerically unsharp, or imprecise, probabilities, a generalization of the numerically sharp model based on sets of sharp probabilities rather than on single sharp probabilities. This phenomenon, called \emph{dilation}, occurs when conditioning on the outcome of an experiment uniformly increases, or dilates, the spread between the lower and upper probabilities of an event generated by a set of probabilities---no matter what the experimental outcome is. A well-known result asserts that non-conglomerability entails a failure of disintegrability---that is, a failure of the law of total expectation---and that, subject to mild regularity conditions, non-conglomerability is equivalent to a failure of disintegrability. Both non-conglomerability and dilation have been alleged to conflict with dominance principles taken to govern rational decision making. Both have also been alleged to conflict with a fundamental principle that I.J. Good---and, with less fanfare, others before him, including Ramsey and Savage---pointed out is an elementary consequence of Bayesian methodology: the prescription to refuse to make a terminal decision between alternative courses of action when there is an opportunity to delay the decision in order first to learn, at no cost, the outcome of an experiment relevant to the decision. In particular, both non-conglomerability and dilation have been alleged to permit or even mandate making a terminal decision in deliberate ignorance of cost-free information relevant to the decision.
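Dilation can be made concrete with a familiar toy setup (a standard illustration in the literature, not an example drawn from this article): a fair coin X, a second coin Z of unknown bias t, and Y defined as X XOR Z. Every prior in the resulting set assigns probability 1/2 to X landing heads, yet conditioning on either value of Y spreads that point out toward the whole unit interval. The following numeric check enumerates a grid of biases t; the grid and variable names are illustrative.

```python
# Illustrative numeric check of dilation (a standard toy example, not taken
# from the article): X is a fair coin; Z has unknown bias t in [0, 1];
# Y = X XOR Z. Each t determines one sharp joint distribution in the set.

def cond_prob_x_heads(t, y):
    """P(X = heads | Y = y) under the joint distribution determined by t."""
    num = den = 0.0
    for x in (0, 1):          # x = 1 means heads; X is fair
        for z in (0, 1):      # z = 1 with probability t, independent of X
            p = 0.5 * (t if z == 1 else 1 - t)
            if x ^ z == y:    # Y is the XOR of the two coins
                den += p
                if x == 1:
                    num += p
    return num / den

grid = [i / 100 for i in range(1, 100)]   # interior grid avoids 0/0 edge cases
for y in (0, 1):
    vals = [cond_prob_x_heads(t, y) for t in grid]
    lower, upper = min(vals), max(vals)
    # unconditionally the set pins P(X = heads) to the point 1/2; after
    # conditioning, the lower-upper interval strictly contains it -- for
    # BOTH experimental outcomes, which is what makes this dilation
    assert lower < 0.5 < upper
```

Algebraically, P(X = heads | Y = y) works out to 1 - t for y = 1 and t for y = 0, so as t ranges over the set of priors the conditional probability sweeps out the interval regardless of which outcome is observed.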

Although dilation and non-conglomerability share some similarities, some authors maintain that there are important differences between the two that warrant endorsing different normative positions regarding dilation and non-conglomerability---which itself is intimately tied to the extent to which probabilities fail to be additive and in particular countably additive. This article reassesses the grounds for treating dilation and non-conglomerability differently. In particular, we argue that a number of authors (e.g., Seidenfeld, Walley) endorse positions concerning dilation and non-conglomerability that are philosophically untenable. Our analysis exploits a new and general characterization result for dilation to draw a closer connection between dilation and non-conglomerability.

"Strictly Coherent Choice under Uncertainty."

This article extends the axiomatic foundations for the theory expounded in ``Strictly Coherent Preferences, No Holds Barred'' to the setting of severe uncertainty, where judgments of probability, utility, and expected utility are not required to have the structure of a weak ordering---that is, in short, to the setting of indeterminate probabilities. I advance axiomatic foundations for a normative theory of choice based on a system of axioms regulating judgments of acceptability of feasible options. As in the previous article, judgments of acceptability are required to respect the principle of weak dominance, but they are not required to satisfy Archimedean conditions. Say that an agent's judgments of acceptability are strictly coherent if they satisfy the axioms proposed in this article. I show that an agent's judgments of acceptability are strictly coherent just in case there are a utility function and a family of regular (possibly non-Archimedean) probability functions such that for each potential decision problem, an option is judged to be acceptable (unacceptable) for choice in the decision problem if and only if it is Bayes-admissible (Bayes-inadmissible) for some (no) probability function from the family---that is, if and only if it maximizes (fails to maximize) subjective expected utility with respect to some probability function from the family. The notion of Bayes-admissibility corresponds to the notion of admissibility studied in Seidenfeld et al. (2010) for indeterminate (real-valued) probabilities; like their theory, the theory of strictly coherent choice distinguishes between any two distinct families of probabilities.
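The Bayes-admissibility criterion can be sketched in the simplest finite case. The following is a hedged illustration under assumptions not in the article: finitely many states, options given as payoff vectors, and a finite set of probability vectors standing in for the family of probability functions.

```python
# Hedged sketch (illustrative assumptions, not the article's formalism):
# with finitely many states and a finite set of probability vectors, an
# option is Bayes-admissible just in case it maximizes expected utility
# under at least one member of the set.

def expected_utility(p, option):
    """Expected utility of a payoff vector under probability vector p."""
    return sum(pi * u for pi, u in zip(p, option))

def bayes_admissible(options, prob_set):
    """Indices of options that maximize expected utility for some p."""
    admissible = set()
    for p in prob_set:
        best = max(expected_utility(p, o) for o in options)
        for i, o in enumerate(options):
            if expected_utility(p, o) == best:
                admissible.add(i)
    return admissible

# Two states; three options; a two-element set of probability vectors.
options = [(1.0, 0.0), (0.0, 1.0), (0.45, 0.45)]
prob_set = [(0.8, 0.2), (0.2, 0.8)]
# option 0 is best under (0.8, 0.2) and option 1 under (0.2, 0.8);
# option 2 maximizes under neither, so it is Bayes-inadmissible
assert bayes_admissible(options, prob_set) == {0, 1}
```

Note that the middle option, though a natural hedge, is admissible under no member of the set; this is the sense in which admissibility quantifies existentially over the family rather than aggregating across it.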

"Full Belief and Partial Belief" (for Handbook of Formal Epistemology, R. Pettigrew and J. Weisberg, eds.).

While the notion of ungraded belief---also called flat-out belief, binary belief, all-or-nothing belief, or qualitative belief---has been central to traditional epistemology, the notion of graded belief---also called degree of belief, degree of confidence, credence, or partial belief---has been the predominant notion of philosophical interest in Bayesian epistemology. This handbook entry provides a systematic overview of the central questions that have been raised about graded and ungraded belief, a critical survey of the variety of answers that have been given to these questions, and an assessment of the outlook for future research on graded and ungraded belief.

"Optimal Stopping for Sampling in Decisions from Experience" (with Ralph Hertwig).

In many decisions, people lack access to explicit information relevant for assessing the risks involved in choosing among prospects. In such cases, they may resort to sampling their environment to acquire the information needed to assess those risks and arrive at a decision. Decisions of this kind are called decisions from experience, in contrast with decisions from description, where outcomes and their probabilities are explicitly provided. Choice behavior in decisions from description and decisions from experience can differ dramatically (Hertwig et al. 2004). In this article, we analyze optimal stopping for sampling in decisions from experience. Our analysis seeks to provide an appropriate normative benchmark for empirical research on decisions from experience. We examine optimal stopping in sampling models for (i) numerically precise probabilities and (ii) numerically indeterminate probabilities, in order of increasing plausibility as appropriate normative benchmarks.

"Descriptive Decision Theory" (for Stanford Encyclopedia of Philosophy, E. Zalta, ed., with Jake Chandler).

Descriptive decision theory is a field of inquiry that aims to \emph{describe}, in some sense, how human beings make decisions. Traditionally, descriptive decision theory is distinguished from normative decision theory, which aims to uncover how human beings should, in some sense, make decisions, while both descriptive decision theory and normative decision theory are distinguished from prescriptive decision theory, which aims to identify ways for human beings to improve, in some sense, their decisions. While this tripartite distinction seems simple enough on the face of it, a closer examination will reveal blurred lines. This entry discusses the central philosophical themes, debates, and problems in descriptive decision theory.