Finding the optimal size of deep learning models is a timely problem of broad impact, especially for energy-efficient training. Recently, an unexpected phenomenon, the ``double descent'', has caught the attention of the deep learning community: as the model size grows, performance first degrades and then improves again. This raises serious questions about the optimal model size for high generalization: the model needs to be sufficiently over-parametrized, but adding too many parameters wastes training resources. Can the best trade-off be found efficiently? Our work shows that the double descent phenomenon is potentially avoidable with proper conditioning of the learning problem, although a definitive answer is yet to be found. We empirically observe that there is hope of dodging the double descent in complex scenarios with proper regularization, as even a simple $\ell_2$ penalty already contributes positively toward this goal.