Optimization plays a key role in the training of deep neural networks. Deciding when to stop training can have a substantial impact on the performance of the network during inference. Under certain conditions, the generalization error can display a double descent pattern during training: the learning curve is non-monotonic and seemingly diverges before converging again after additional epochs. This optimization pattern can cause early stopping procedures to halt training before the second convergence and consequently select a suboptimal set of parameters for the network, with worse performance during inference. In this work, in addition to confirming that double descent occurs with small datasets and noisy labels, as reported by others, we show that noisy labels must be present both in the training and generalization sets to observe a double descent pattern. We also show that the learning rate has an influence on double descent, and we study how different optimizers and optimizer parameters influence its emergence. Finally, we show that increasing the learning rate can create an aliasing effect that masks the double descent pattern without suppressing it. We study this phenomenon through extensive experiments on variants of CIFAR-10 and show that our findings translate to a real-world application: forecasting seizure events in epileptic patients from continuous electroencephalographic recordings.
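To make the early-stopping concern concrete, the following is a minimal, hypothetical sketch (not the paper's procedure or data): a standard patience-based early-stopping rule applied to a synthetic validation-loss curve shaped like epoch-wise double descent. The curve values and the patience setting are illustrative assumptions only, chosen to show how such a rule can halt during the apparent divergence and miss the deeper second descent.

```python
def early_stop_epoch(val_losses, patience=10):
    """Return the epoch at which patience-based early stopping would halt:
    the first epoch where the best validation loss has not improved for
    `patience` consecutive epochs, or the last epoch otherwise."""
    best = float("inf")
    best_epoch = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            return epoch  # training would stop here
    return len(val_losses) - 1


# Synthetic double-descent-like curve (illustrative values, not real data):
# a first descent, an apparent divergence, then a second, deeper descent.
curve = (
    [1.0 - 0.04 * t for t in range(15)]        # first descent, down to ~0.44
    + [0.44 + 0.02 * t for t in range(20)]     # ascent (apparent divergence)
    + [0.84 - 0.015 * t for t in range(50)]    # second descent, down to ~0.10
)

stop = early_stop_epoch(curve, patience=10)
print(f"early stopping halts at epoch {stop} with loss {curve[stop]:.2f}")
print(f"global minimum is at epoch {curve.index(min(curve))} with loss {min(curve):.2f}")
```

Run as-is, the rule stops during the ascent (around loss 0.62), whereas continuing past the second convergence would reach a markedly lower loss near the end of training, which is the suboptimal-parameter-selection failure mode described above.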