Bayes factors are an increasingly popular tool for indexing evidence from experiments. For two competing population models, the Bayes factor reflects the relative likelihood of observing some data under one model compared to the other. In general, computing a Bayes factor is difficult, because obtaining the marginal likelihood of each model requires integrating the product of the likelihood and a prior distribution over the population parameter(s). In this paper, we develop a new analytic formula for computing Bayes factors directly from minimal summary statistics in repeated-measures designs. This work improves on previous methods for computing Bayes factors from summary statistics (e.g., the BIC method), which produce Bayes factors that violate the Sellke upper bound on evidence for small sample sizes. The new approach requires knowing only the $F$-statistic and its degrees of freedom, both of which are commonly reported in most empirical work. In addition to providing computational examples, we report a simulation study that benchmarks the new formula against other methods for computing Bayes factors in repeated-measures designs. Our new method provides an easy way for researchers to compute Bayes factors directly from a minimal set of summary statistics, allowing users to index the evidential value of their own data, as well as data reported in published studies.
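To give a sense of how a Bayes factor can be recovered from these quantities alone, consider the BIC approximation mentioned above as a point of comparison (the new analytic formula itself is developed in the body of the paper and is not reproduced here). Writing $n$ for the number of independent observations and $df_1$, $df_2$ for the numerator and denominator degrees of freedom of the $F$-statistic (notation introduced here for illustration), the BIC approximation gives
$$ \text{BF}_{01} \approx \sqrt{\, n^{df_1} \left( 1 + \frac{F \cdot df_1}{df_2} \right)^{-n} }. $$
For example, $F(1, 29) = 5.00$ with $n = 30$ observations yields $\text{BF}_{01} \approx \sqrt{30\,(1 + 5/29)^{-30}} \approx 0.50$, i.e., roughly 2-to-1 evidence in favor of the alternative; it is the behavior of this approximation at small sample sizes that motivates the new formula.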