Updating by maximizing expected accuracy in infinite non-partitional settings
Kenny Easwaran (Irvine)

Greaves and Wallace (2006) justify Bayesian conditionalization as the update plan that maximizes expected accuracy for an agent who considers finitely many possibilities and is about to undergo a learning event in which the potential propositions she might learn form a partition. In recent years, several philosophers have generalized this argument to less idealized circumstances. Some authors (Easwaran 2013b; Nielsen 2022) relax finiteness, while others (Carr 2021; Gallow 2021; Isaacs and Russell 2022; Schultheis 2023) relax partitionality. In this paper, we show how to do both at once. We give novel philosophical justifications of the use of σ-algebras in the infinite setting and argue for a different interpretation of the "signals" in the non-partitional setting. We show that the resulting update plan mitigates some problems, such as the Borel–Kolmogorov paradox, that arise when finiteness is relaxed but partitionality is not.
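To fix ideas, here is a minimal sketch of the baseline result being generalized, in our own notation (the prior P, partition cells E_i, posteriors Q_i, and accuracy measure 𝒜 are not the abstract's). In the finite, partitional case, an update plan assigns a posterior Q_i to each cell E_i the agent might learn, and its expected accuracy under the prior, for a strictly proper accuracy measure 𝒜, is

\[
  \mathrm{EA}(Q_1, \dots, Q_n) \;=\; \sum_{i=1}^{n} \sum_{w \in E_i} P(w)\, \mathcal{A}(Q_i, w).
\]

Greaves and Wallace's theorem is that this quantity is maximized by setting each Q_i to the conditional prior P(· | E_i), i.e., by the plan that conditionalizes on whichever cell is learned; it is this optimality claim that the paper extends to infinite, non-partitional settings.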