Overparameterized deep networks have the capacity to memorize training data, achieving zero \emph{training error}. Even after memorization, the \emph{training loss} continues to approach zero, making the model overconfident and degrading test performance. Since existing regularizers do not directly aim to avoid zero training loss, it is hard to tune their hyperparameters to maintain a fixed, preset level of training loss. We propose a direct solution called \emph{flooding} that intentionally prevents further reduction of the training loss once it reaches a reasonably small value, which we call the \emph{flood level}. Our approach makes the loss float around the flood level by performing mini-batched gradient descent as usual, but gradient ascent whenever the training loss falls below the flood level. Flooding can be implemented with one line of code and is compatible with any stochastic optimizer and other regularizers. With flooding, the model continues to ``random walk'' with the same non-zero training loss, and we expect it to drift into an area with a flat loss landscape, which leads to better generalization. We experimentally show that flooding improves performance and, as a byproduct, induces a double descent curve of the test loss.
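The descent/ascent mechanism can be sketched numerically. The following is an illustrative toy, not the paper's experimental setup: the quadratic ``training loss'', the flood level $b=0.1$, and the function name `flood` are assumptions made here for demonstration. The key point is that the flooded objective $|L - b| + b$ has the same gradient as $L$ above the flood level and a sign-flipped gradient below it, so minimizing it makes the loss float around $b$ rather than going to zero.

```python
import numpy as np

def flood(loss, b):
    """Flooded objective (sketch): |loss - b| + b.
    Above the flood level b, its gradient matches the plain loss (descent);
    below b, the gradient sign flips (ascent), so the loss floats around b."""
    return np.abs(loss - b) + b

# Toy example (illustrative assumption): a 1-D "training loss" L(w) = w**2
# minimized by gradient descent on flood(L, b) with flood level b = 0.1.
w, b, lr = 1.0, 0.1, 0.1
for _ in range(200):
    L = w * w
    # Chain rule through |L - b| + b: gradient is sign(L - b) * dL/dw.
    w -= lr * np.sign(L - b) * 2 * w
# Plain gradient descent would drive L to 0; with flooding it hovers near b.
print(w * w)
```

In a deep-learning framework the same idea reduces to a one-liner applied to the mini-batch loss before backpropagation, e.g. `loss = (loss - b).abs() + b` in PyTorch (a paraphrase of the abstract's one-line-of-code claim, not verbatim from the paper).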