Deep neural networks (DNNs) achieve remarkable performance but, owing to their high capacity, often overfit. We introduce Momentum-Adaptive Gradient Dropout (MAGDrop), a novel regularization method that dynamically adjusts per-activation dropout rates based on current gradients and accumulated momentum, improving stability in non-convex optimization landscapes. To theoretically justify MAGDrop's effectiveness, we derive a tightened PAC-Bayes generalization bound that accounts for its adaptive nature and, by exploiting momentum-driven perturbation control, is up to 20% sharper than standard bounds. Empirically, activation-based MAGDrop outperforms baseline regularization techniques, including standard dropout and adaptive gradient regularization, by 1-2 percentage points in test accuracy on MNIST (99.52%) and CIFAR-10 (90.63%), with generalization gaps of 0.48% and 7.14%, respectively. Our work bridges theoretical insights and practical advances, offering a robust framework for improving DNN generalization in high-stakes applications.
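To make the mechanism concrete, the following is a minimal illustrative sketch of momentum-adaptive dropout on activations, written in plain NumPy. The abstract does not specify the exact update rule, so the function name, the exponential-moving-average accumulation, and the parameters (`beta`, `base_rate`, `scale`) are assumptions introduced here for illustration only, not the paper's actual algorithm.

```python
import numpy as np

def magdrop(activations, grads, momentum_buf, beta=0.9,
            base_rate=0.5, scale=1.0, rng=None):
    """Hypothetical sketch of momentum-adaptive gradient dropout.

    Per-activation drop probabilities are modulated by the current gradient
    magnitude combined with an exponential moving average (momentum) of past
    gradient magnitudes. The specific modulation rule below is illustrative,
    not the rule derived in the paper.
    """
    rng = np.random.default_rng() if rng is None else rng

    # Accumulate gradient magnitudes with momentum (exponential moving average).
    momentum_buf = beta * momentum_buf + (1.0 - beta) * np.abs(grads)

    # Normalize the momentum-weighted gradient signal to [0, 1].
    signal = momentum_buf / (momentum_buf.max() + 1e-12)

    # Adaptive per-unit dropout rate, clipped to a sensible range.
    drop_prob = np.clip(base_rate * (1.0 + scale * (signal - signal.mean())),
                        0.0, 0.95)

    # Inverted dropout with per-unit rates so expected activations are preserved.
    mask = rng.random(activations.shape) >= drop_prob
    keep_prob = 1.0 - drop_prob
    out = activations * mask / np.maximum(keep_prob, 1e-12)
    return out, momentum_buf
```

In such a setup, a training loop would keep one momentum buffer per regularized layer (initialized with `np.zeros_like` on the activations) and pass the updated buffer back in at each step.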