We present convincing empirical evidence for an effective and general strategy for building accurate small models. Such models are attractive for interpretability and also find use in resource-constrained environments. The strategy is to learn the training distribution rather than training on data drawn directly from the test distribution. The distribution-learning algorithm itself is not a contribution of this work; our contribution is a rigorous empirical demonstration of the broad usefulness of this simple strategy across a diverse set of tasks. We apply it to the tasks of (1) building cluster explanation trees, (2) prototype-based classification, and (3) classification using Random Forests, and show that it improves the accuracy of weak traditional baselines to the point that they are surprisingly competitive with specialized modern techniques. The strategy is also versatile with respect to the notion of model size. In the first two tasks, model size is measured by the number of leaves in the tree and the number of prototypes, respectively. In the final task, involving Random Forests, the strategy is shown to be effective even when model size is determined by more than one factor: the number of trees and their maximum depth. We present positive results on multiple datasets and show that the improvements are statistically significant. These results lead us to conclude that the strategy is both effective, i.e., it leads to significant improvements, and general, i.e., it applies to different tasks and model families, and therefore merits further attention in domains that require small, accurate models.
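To make the strategy concrete, the following is a minimal, illustrative sketch only: the actual distribution-learning algorithm used in this work is not reproduced here, and the random search over per-class sampling weights, the synthetic dataset, and the small decision tree (size capped by number of leaves) are all stand-in assumptions for exposition.

```python
# Hypothetical sketch: train a size-limited model on a *learned* training
# distribution (here, searched per-class sampling weights) instead of on
# data weighted exactly as in the test distribution.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=4000, n_features=20, n_informative=5,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X_tr, y_tr, test_size=0.3,
                                            random_state=0)

def fit_small_tree(X, y, sample_weight=None):
    # "Small model": tree size is capped by the number of leaves.
    tree = DecisionTreeClassifier(max_leaf_nodes=8, random_state=0)
    tree.fit(X, y, sample_weight=sample_weight)
    return tree

# Baseline: small tree trained on the original (test-like) distribution.
baseline = fit_small_tree(X_tr, y_tr)

# Stand-in "distribution learning": random search for a class distribution
# whose induced sample weights maximize validation accuracy of the small tree.
best_acc, best_w = -1.0, None
for _ in range(50):
    w_class = rng.dirichlet(np.ones(3))          # candidate class distribution
    w = w_class[y_tr] / np.bincount(y_tr)[y_tr]  # per-example sampling weights
    acc = fit_small_tree(X_tr, y_tr, sample_weight=w).score(X_val, y_val)
    if acc > best_acc:
        best_acc, best_w = acc, w

tuned = fit_small_tree(X_tr, y_tr, sample_weight=best_w)
print("baseline test accuracy:", baseline.score(X_te, y_te))
print("learned-distribution test accuracy:", tuned.score(X_te, y_te))
```

The same pattern extends to the other notions of model size mentioned above, e.g., the number of prototypes, or the number of trees and maximum depth of a Random Forest.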