Ensembles over neural network weights trained from different random initializations, known as deep ensembles, achieve state-of-the-art accuracy and calibration. The recently introduced batch ensembles provide a drop-in replacement that is more parameter efficient. In this paper, we design ensembles not only over weights, but over hyperparameters, to improve the state of the art in both settings. For best performance independent of the budget, we propose hyper-deep ensembles, a simple procedure that involves a random search over different hyperparameters, themselves stratified across multiple random initializations. Its strong performance highlights the benefit of combining models with both weight and hyperparameter diversity. We further propose a parameter-efficient version, hyper-batch ensembles, which builds on the layer structure of batch ensembles and self-tuning networks. The computational and memory costs of our method are notably lower than those of typical ensembles. On image classification tasks, with MLP, LeNet, and Wide ResNet 28-10 architectures, our methodology improves upon both deep and batch ensembles.
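A minimal sketch of the hyper-deep ensemble idea described above, on a toy problem: a random search samples several hyperparameter configurations, each configuration is trained from multiple random initializations, and predictions are averaged over all members. The toy logistic-regression model, the hyperparameter ranges, and the helper names (`train_logreg`, `predict_proba`) are illustrative assumptions, not the paper's implementation; in particular, the full procedure may additionally select members on validation performance rather than averaging all of them.

```python
import numpy as np

# Illustrative sketch only: random search over hyperparameters,
# stratified across random initializations, with averaged predictions.

rng = np.random.default_rng(0)

# Toy binary classification data (stand-in for an image task).
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

def train_logreg(X, y, l2, lr, seed, steps=500):
    """Logistic regression by gradient descent; `seed` controls the init."""
    w = np.random.default_rng(seed).normal(scale=0.1, size=X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (p - y) / len(y) + l2 * w
        w -= lr * grad
    return w

def predict_proba(w, X):
    return 1.0 / (1.0 + np.exp(-X @ w))

# Random search over hyperparameters, stratified over random inits.
n_hparams, n_inits = 4, 3
members = []
for _ in range(n_hparams):
    # Hypothetical search space: L2 strength and learning rate.
    l2 = 10 ** rng.uniform(-4, -1)
    lr = 10 ** rng.uniform(-2, 0)
    for seed in range(n_inits):
        members.append(train_logreg(X, y, l2, lr, seed))

# Ensemble prediction: average member probabilities.
ens_probs = np.mean([predict_proba(w, X) for w in members], axis=0)
print("ensemble accuracy:", np.mean((ens_probs > 0.5) == y))
```

The nesting order matters: each sampled hyperparameter configuration contributes several members differing only in their random initialization, so the resulting ensemble mixes both sources of diversity that the abstract credits for the gains.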