That science and other domains are now largely data-driven means virtually unlimited opportunities for statisticians. With great power comes responsibility, so it is imperative that statisticians ensure that the methods being developed to solve these problems are reliable. But reliable in what sense? This question is problematic because different notions of reliability correspond to distinct statistical schools of thought, each with its own philosophy and methodology, often giving different answers in applications. To achieve the goal of reliably solving modern problems, I argue that a balance of behavioral and statistical priorities is needed. Towards this, I make use of Fisher's "underworld of probability" to motivate a new property called invulnerability that, roughly, requires the statistician to avoid the risk of losing money in a long-run sense. Then I go on to make connections between invulnerability and the more familiar behaviorally- and statistically-motivated notions, namely coherence and (frequentist-style) validity.