Finding the model that best describes a high-dimensional dataset is a daunting task. For binary data, we show that this becomes feasible when restricting the search to a family of simple models, which we call Minimally Complex Models (MCMs). These are spin models, with interactions of arbitrary order, that are composed of independent components of minimal complexity (Beretta et al., 2018). They tend to be simple in information-theoretic terms, which means that they fit well only specific types of data and are therefore easy to falsify. We show that Bayesian model selection restricted to these models is computationally feasible and has many other advantages. First, their evidence, which trades off goodness-of-fit against model complexity, can be computed easily without any parameter fitting. This makes it possible to select the best MCM among all candidates, even though the number of models is astronomically large. Furthermore, MCMs can be inferred and sampled from without any computational effort. Finally, model selection among MCMs is invariant with respect to changes in the representation of the data. MCMs portray the structure of dependencies among variables in a simple way, as illustrated in several examples, and thus provide robust predictions on dependencies in the data. MCMs contain interactions of any order between variables, and thus may reveal the presence of interactions of order higher than pairwise.
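To illustrate the idea that Bayesian evidence for binary data can sometimes be computed in closed form without fitting any parameters, the sketch below evaluates the log-evidence of the simplest possible model: one that treats every binary variable as independent with a uniform Beta(1, 1) prior on its bias. This is only a toy analogue, not the MCM evidence formula from the paper; the function name and dataset are illustrative assumptions.

```python
import math

def log_evidence_independent(data):
    """Log marginal likelihood (evidence) of a model treating each binary
    variable as independent, with a uniform Beta(1, 1) prior on each bias.
    Integrating the bias out analytically gives, per variable,
    k! (n - k)! / (n + 1)!  where k = number of ones among n samples,
    so no parameter fitting is needed -- only counting."""
    n = len(data)        # number of samples
    d = len(data[0])     # number of binary variables
    total = 0.0
    for j in range(d):
        k = sum(row[j] for row in data)  # count of ones for variable j
        # log of k! (n - k)! / (n + 1)! via log-gamma for numerical stability
        total += (math.lgamma(k + 1) + math.lgamma(n - k + 1)
                  - math.lgamma(n + 2))
    return total

# Toy dataset: 4 samples of 2 binary variables (all configurations once).
data = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(log_evidence_independent(data))  # 2 * log(1/30)
```

Because the evidence reduces to counting statistics, two candidate partitions of the variables can be compared simply by evaluating and comparing their log-evidences, which is what makes an exhaustive search over a restricted model family tractable.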