Regularization methods are often employed in deep neural networks (DNNs) to prevent overfitting. For penalty-based DNN regularization methods, convex penalties are typically considered because of their optimization guarantees. Recent theoretical work has shown that nonconvex penalties satisfying certain regularity conditions are also guaranteed to perform well with standard optimization algorithms. In this paper, we examine new and existing nonconvex penalties for DNN regularization. We provide theoretical justifications for the new penalties and assess the performance of all penalties through DNN analyses of seven datasets.