Finding the model that best describes a high-dimensional dataset is a daunting task, even more so if one aims to consider all possible high-order patterns of the data, going beyond pairwise models. For binary data, we show that this task becomes feasible when the search is restricted to a family of simple models, which we call Minimally Complex Models (MCMs). MCMs are maximum entropy models in which interactions of arbitrarily high order are grouped into independent components of minimal complexity. They are simple in information-theoretic terms, which means they can fit well only certain types of data patterns and are therefore easy to falsify. We show that Bayesian model selection restricted to these models is computationally feasible and has many advantages. First, the model evidence, which balances goodness-of-fit against complexity, can be computed efficiently without any parameter fitting, enabling very fast exploration of the space of MCMs. Second, the family of MCMs is invariant under gauge transformations, which can be used to develop a representation-independent approach to statistical modeling. For small systems (up to 15 variables), these two results allow us to select the best MCM among all possible MCMs, even though their number is already extremely large. For larger systems, we propose simple heuristics to find optimal MCMs in reasonable time. In addition, inference and sampling can be performed without any computational effort. Finally, because MCMs contain interactions of any order, they can reveal the presence of important high-order dependencies in the data, providing a new approach to exploring such dependencies in complex systems. We apply our method to synthetic data and real-world examples, illustrating how MCMs portray the structure of dependencies among variables in a simple manner, extracting falsifiable predictions about symmetries and invariances from the data.
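The central computational claim, that the model evidence can be evaluated without fitting any parameters, can be illustrated with a short sketch. The Python code below is a minimal illustration under two assumptions of ours, not a statement of the authors' reference implementation: that each independent component of an MCM is complete, so the maximum entropy model restricted to it is an unconstrained categorical distribution over its 2^r states, and that a Jeffreys (Dirichlet-1/2) prior is placed on those state probabilities, which makes each component's evidence a closed-form Dirichlet-multinomial marginal likelihood. The names `log_evidence_component` and `log_evidence_mcm` are hypothetical.

```python
import numpy as np
from math import lgamma

def log_evidence_component(data, component):
    """Log-evidence of one independent component of an MCM.

    Sketch under assumptions: the component is complete, so its maximum
    entropy model is an unconstrained categorical distribution over its
    2**r states; with a Jeffreys (Dirichlet-1/2) prior the evidence is
    the Dirichlet-multinomial marginal likelihood -- no fitting needed.

    data      : (N, n) array of 0/1 observations
    component : list of column indices belonging to this component
    """
    N = data.shape[0]
    r = len(component)
    q = 2 ** r                                  # number of component states
    # Map each sample's restriction to the component onto an integer state.
    states = data[:, component] @ (1 << np.arange(r))
    counts = np.bincount(states, minlength=q)
    # Closed-form Dirichlet-multinomial evidence, concentration 1/2 per state.
    log_ev = lgamma(q / 2) - q * lgamma(0.5) - lgamma(N + q / 2)
    log_ev += sum(lgamma(k + 0.5) for k in counts)
    return log_ev

def log_evidence_mcm(data, partition):
    """The evidence of an MCM factorizes over its independent components."""
    return sum(log_evidence_component(data, c) for c in partition)

# Usage: compare two candidate partitions of 4 binary variables.
rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=(1000, 4))
print(log_evidence_mcm(data, [[0, 1], [2, 3]]))   # two pairwise components
print(log_evidence_mcm(data, [[0, 1, 2, 3]]))     # one complete component
```

Because the evidence factorizes over components and each factor depends on the data only through state counts, scoring a candidate partition costs a single pass over the samples; this is what makes exhaustive search feasible for small systems and fast heuristic search feasible for larger ones.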