Ensembles over neural network weights trained from different random initializations, known as deep ensembles, achieve state-of-the-art accuracy and calibration. The recently introduced batch ensembles provide a drop-in replacement that is more parameter efficient. In this paper, we design ensembles not only over weights but also over hyperparameters, to improve the state of the art in both settings. For best performance independent of budget, we propose hyper-deep ensembles, a simple procedure that involves a random search over different hyperparameters, themselves stratified across multiple random initializations. Its strong performance highlights the benefit of combining models with both weight and hyperparameter diversity. We further propose a parameter-efficient version, hyper-batch ensembles, which builds on the layer structure of batch ensembles and self-tuning networks. The computational and memory costs of our method are notably lower than those of typical ensembles. On image classification tasks, with MLP, LeNet, ResNet 20 and Wide ResNet 28-10 architectures, we improve upon both deep and batch ensembles.
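The hyper-deep ensemble recipe described above can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: `train_model` is a hypothetical stand-in for training a network under a given hyperparameter setting and random seed, and the single sampled hyperparameter `lr` stands in for a full search space.

```python
import random

def train_model(lr, seed):
    # Hypothetical stand-in for training a neural network with
    # hyperparameter `lr` from random initialization `seed`.
    rng = random.Random(seed)
    bias = rng.uniform(-0.1, 0.1)  # stands in for learned weights
    return lambda x: max(0.0, min(1.0, 0.5 + lr * x + bias))

def hyper_deep_ensemble(num_hypers=3, num_seeds=2):
    """Random search over hyperparameters, each setting stratified
    across multiple random initializations."""
    models = []
    search_rng = random.Random(0)
    for _ in range(num_hypers):            # random search over hyperparameters
        lr = search_rng.uniform(0.01, 0.1) # one sampled setting (toy: one knob)
        for seed in range(num_seeds):      # several random inits per setting
            models.append(train_model(lr, seed))
    return models

def ensemble_predict(models, x):
    # Combine members by averaging their predictions.
    return sum(m(x) for m in models) / len(models)
```

The resulting ensemble mixes both sources of diversity: members differ in hyperparameters across the outer loop and in initialization across the inner loop.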
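Since hyper-batch ensembles build on the layer structure of batch ensembles, the sketch below shows that underlying structure: each ensemble member shares one weight matrix, modulated by cheap per-member rank-1 factors. This is a simplified NumPy illustration of a batch-ensemble dense layer, not the paper's hyper-batch extension; class and variable names are illustrative.

```python
import numpy as np

class BatchEnsembleDense:
    """Dense layer where member i uses W * outer(r_i, s_i):
    one shared weight matrix plus per-member rank-1 factors."""

    def __init__(self, d_in, d_out, num_members, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.1, size=(d_in, d_out))       # shared weights
        self.r = rng.normal(1.0, 0.1, size=(num_members, d_in)) # input scaling
        self.s = rng.normal(1.0, 0.1, size=(num_members, d_out))# output scaling

    def forward(self, x, member):
        # Equivalent to x @ (W * outer(r, s)), computed without ever
        # materializing a separate weight matrix per member.
        return ((x * self.r[member]) @ self.W) * self.s[member]
```

The memory cost per extra member is only the two rank-1 vectors, which is why batch ensembles, and the hyper-batch ensembles built on them, are far cheaper than storing independent weight copies.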