In Bayesian hypothesis testing, evidence for a statistical model is quantified by the Bayes factor, which represents the relative likelihood of the observed data under that model compared to a competing model. In general, computing Bayes factors is difficult, because the marginal likelihood of the data under a given model requires integrating over a prior distribution on the model parameters. This paper builds on prior work that uses the BIC to compute approximate Bayes factors directly from the summary statistics of common experimental designs (i.e., the t-test and analysis of variance). Here, I capitalize on a particular choice of prior distribution that allows the Bayes factor to be expressed in closed form (i.e., without an integral representation), leading to a relatively simple formula (the Pearson Bayes factor) that requires only minimal summary statistics commonly reported in scientific papers, such as the t or F statistic and the degrees of freedom. This gives applied researchers the ability to compute exact Bayes factors from minimal summary data, and thus to easily assess the evidential value of any data for which these summary statistics are reported, even when the original data are not available.
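To make the prior work concrete: the BIC-based approximation mentioned above can be computed directly from a reported F (or t) statistic, the degrees of freedom, and the sample size, using the standard identity relating the F statistic of nested linear models to their residual sums of squares. The sketch below assumes that identity and Wagenmakers-style sign conventions (BF01 = exp(ΔBIC10 / 2)); the function name and interface are illustrative, and this is the *approximate* Bayes factor the paper improves upon, not the Pearson Bayes factor itself.

```python
import math

def bic_bf01(stat, df1, df2, n):
    """Approximate BF01 (evidence for the null) from an F statistic
    via the BIC approximation described in the prior work.
    For a t-test, pass stat = t**2 and df1 = 1.

    Assumes the nested-linear-model identity
        SSE0 / SSE1 = 1 + df1 * F / df2,
    so that Delta BIC_10 = -n * ln(SSE0/SSE1) + df1 * ln(n).
    """
    # Ratio of residual sums of squares implied by the F statistic
    sse_ratio = 1.0 + df1 * stat / df2
    # BIC(H1) - BIC(H0): H1 fits better (smaller SSE) but pays
    # a complexity penalty of df1 extra parameters
    delta_bic_10 = -n * math.log(sse_ratio) + df1 * math.log(n)
    # BF01 = exp(Delta BIC_10 / 2); values < 1 favor the alternative
    return math.exp(delta_bic_10 / 2.0)
```

For example, a one-sample t-test with t = 2.5 and n = 30 (so df = 29) gives `bic_bf01(2.5**2, 1, 29, 30)` ≈ 0.29, i.e., roughly 3.4-to-1 evidence for the alternative. When the test statistic is zero, the approximation reduces to BF01 = √n, the usual unit-information-prior result.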