Are the repeated sampling principle and Cournot’s principle frequentist?
  1. Marshall Abrams, philosophy (Alabama).
  2. Ruobin Gong, statistics (Rutgers).
  3. Alistair Wilson, philosophy (Birmingham).
  4. Harry Crane, statistics (Rutgers).

Historically, the probability calculus began with repeated trials: throws of a fair die, etc. Independent and identically distributed (iid) random variables still populate elementary textbooks, but most statistical models permit variation and dependence in probabilities. Already in 1816, Pierre-Simon Laplace explained that nature does not follow any constant probability law; its error laws vary with the nature of measurement instruments and with all the circumstances that accompany them. In 1960, Jerzy Neyman explained that scientific applications had moved into a phase of dynamic indeterminism, in which stochastic processes replace iid models.
Statisticians who call themselves “frequentists” have proposed two competing principles to support inferences about parameters in stochastic processes and other complex probability models. First, the repeated sampling principle: assess statistical procedures by their behavior in hypothetical repetitions under the same conditions. Second, Cournot’s principle: justify inferences by statements to which the model gives high probability. Cox and Hinkley coined the name “repeated sampling principle” in 1974. The name “Cournot’s principle” first became current in the 1950s. But both ideas are much older. When interpreted as pragmatic instructions, the two principles are more or less equivalent. But they can also be interpreted as philosophical justifications – even as explanations of the meaning of probability models – and then they seem very different. Questions for the panel include the following.

  1. The repeated sampling principle can be taken as saying that the meaning of a probability measure lies in the assumption that salient probabilities and expected values given by the measure will be replicated by frequencies and averages in hypothetical repetitions. Is this a frequentist interpretation of probability?
  2. Similarly, Cournot’s principle says that the meaning of a probability measure lies in the assumption that salient events with probability close to one will happen. Is this a frequentist interpretation of probability?
  3. From a philosophical perspective, do these two interpretations of probability differ?
  4. The game-theoretic foundation for probability generalizes Cournot’s principle to the case where nonstochastic explanatory or decision variables may be determined in the course of observing the data and be influenced by the values of earlier variables. Here the principle says that a salient betting strategy using the probabilities for the stochastic variables will not multiply its capital by a large factor. Is this principle frequentist?
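The game-theoretic form of Cournot’s principle in item 4 can be made concrete with a toy simulation. The sketch below is purely illustrative and not drawn from any of the panelists’ work: a hypothetical bettor repeatedly stakes a fixed fraction of capital at even money on a fair coin. The principle predicts that, under the coin’s own probabilities, such a strategy will not multiply its capital by a large factor; the names and parameters here (`run_betting_strategy`, the 10% staking fraction) are invented for the example.

```python
import random

def run_betting_strategy(n_rounds=10_000, fraction=0.1, seed=0):
    """Simulate a bettor wagering a fixed fraction of current capital
    at even money on heads of a fair coin.

    Each round the stake is fraction * capital; the bettor wins the
    stake on heads and loses it on tails. Because the expected change
    in log-capital per round, 0.5*log(1+fraction) + 0.5*log(1-fraction),
    is negative, the capital almost surely fails to grow by a large
    factor -- the game-theoretic counterpart of Cournot's principle.
    """
    rng = random.Random(seed)
    capital = 1.0
    for _ in range(n_rounds):
        stake = fraction * capital
        if rng.random() < 0.5:   # heads: even-money win
            capital += stake
        else:                    # tails: lose the stake
            capital -= stake
    return capital

final = run_betting_strategy()
print(final)  # far below the starting capital of 1.0
```

With a biased coin (probability of heads above 1/2), the same code would let the bettor multiply capital, which is exactly how the game-theoretic framework detects a discrepancy between model and data.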