The choice of an appropriate structure and parametric dimension for a model in the light of data has a rich history in statistical research, where the first seminal approaches were developed in the 1970s, such as Akaike's and Schwarz's model scoring criteria, which were inspired by information theory and embodied the rationale known as Occam's razor. After these pioneering works, model choice was quickly established as a field of research in its own right, gaining considerable attention in both computer science and statistics. To date, however, there have been few attempts to derive scoring criteria for simulator-based models that lack a likelihood expression. Bayes factors have been considered for such models, but arguments have been made both for and against their use, in part around issues related to their consistency. Here we use the asymptotic properties of the Jensen--Shannon divergence (JSD) to derive a consistent model scoring criterion for the likelihood-free setting, called JSD-Razor. We analyze the relationships of JSD-Razor to established scoring criteria for the likelihood-based setting and demonstrate the favorable properties of our criterion on both synthetic and real modeling examples.
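For reference, the Jensen--Shannon divergence underlying the criterion is the symmetrized, bounded variant of the Kullback--Leibler divergence: JSD(P, Q) = (1/2) KL(P || M) + (1/2) KL(Q || M) with mixture M = (P + Q)/2. The following is a minimal sketch of this definition for discrete distributions; it illustrates JSD itself only, not the paper's JSD-Razor construction:

```python
import math

def kl_divergence(p, q):
    # KL(p || q) for discrete distributions given as probability lists;
    # terms with p_i = 0 contribute nothing by convention.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jsd(p, q):
    # Jensen-Shannon divergence: average KL to the mixture m = (p + q) / 2.
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

# Identical distributions give 0; distributions with disjoint
# support attain the maximum value log 2 (in nats).
print(jsd([0.5, 0.5], [0.5, 0.5]))  # 0.0
print(jsd([1.0, 0.0], [0.0, 1.0]))  # log(2) ~ 0.6931
```

Unlike KL, JSD is symmetric in its arguments and always finite, which is part of what makes its asymptotic behavior convenient in the likelihood-free setting.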