Deep neural networks represent the gold standard for image classification. However, they usually need large amounts of data to reach high performance. In this work, we focus on image classification problems with only a few labeled examples per class and improve data efficiency by using an ensemble of relatively small networks. Our work is the first to broadly study the existing concept of neural ensembling in the small-data regime, with extensive validation on popular datasets and architectures. We compare ensembles of networks to their deeper or wider single-network competitors under a fixed total computational budget. We show that ensembling relatively shallow networks is a simple yet effective technique that generally outperforms current state-of-the-art approaches for learning from small datasets. Finally, we present our interpretation that neural ensembles are more sample-efficient because they learn simpler functions.
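To make the comparison concrete, the sketch below shows one common way to form an ensemble of several small networks by averaging their softmax outputs; the `SmallCNN` architecture, the PyTorch framework, and the number of members are illustrative assumptions rather than the paper's exact setup. The single-network baseline would simply be a deeper or wider model matched to the ensemble's total parameter count or FLOPs.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Hypothetical shallow member network; the paper's exact architectures are not specified here."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pooling to a 64-dim feature
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

class Ensemble(nn.Module):
    """Averages the softmax outputs of independently trained member networks."""
    def __init__(self, members):
        super().__init__()
        self.members = nn.ModuleList(members)

    def forward(self, x):
        # Stack per-member class probabilities (K, N, C) and average over members.
        probs = torch.stack([m(x).softmax(dim=-1) for m in self.members])
        return probs.mean(dim=0)

# Usage: an ensemble of 4 small networks, to be compared against a single
# deeper or wider model under the same total computational budget.
ensemble = Ensemble([SmallCNN() for _ in range(4)])
avg_probs = ensemble(torch.randn(8, 3, 32, 32))  # shape (8, num_classes)
```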