Deep neural networks (NNs) have achieved great success in many applications. However, why deep neural networks generalize well in the over-parameterized regime remains unclear. To better understand deep NNs, we establish a connection between deep NNs and a novel kernel family, the Neural Optimization Kernel (NOK). The architecture of our structured approximation of the NOK performs monotonic descent updates on implicit regularization problems. By employing different activation functions, e.g., ReLU, max pooling, and soft-thresholding, we can implicitly choose which regularization problem is solved. We further establish a new generalization bound for our deep structured approximated NOK architecture. Our unsupervised structured approximated NOK block can serve as a simple plug-in to popular backbones, improving generalization against input noise.
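To make the activation–regularizer correspondence concrete, the sketch below illustrates the standard unrolled proximal-gradient view under a quadratic data-fit term: soft-thresholding is the proximal operator of an ℓ1 penalty, while ReLU is the proximal operator of the nonnegativity constraint, so swapping the activation swaps the implicit regularizer. This is a minimal generic illustration, not the paper's exact NOK block; the function names, the dictionary `W`, and the step-size choice are assumptions for the example.

```python
import numpy as np

# Illustrative proximal operators: each activation acts as the prox of a regularizer.
def soft_threshold(z, lam):
    """Prox of lam * ||z||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def relu(z, lam=None):
    """Prox of the indicator of the nonnegative orthant (ReLU); lam is unused."""
    return np.maximum(z, 0.0)

def unrolled_block(x, W, prox, lam=0.1, step=0.1, n_layers=10):
    """Unrolled proximal-gradient layers for
       min_z 0.5 * ||W z - x||^2 + r(z),
    where the regularizer r is implicitly chosen by the activation `prox`.
    With a small enough step size, each layer is a monotone descent update."""
    z = np.zeros(W.shape[1])
    for _ in range(n_layers):
        grad = W.T @ (W @ z - x)               # gradient of the data-fit term
        z = prox(z - step * grad, step * lam)  # activation = proximal step
    return z

rng = np.random.default_rng(0)
W = rng.standard_normal((20, 50)) / np.sqrt(20)
x = rng.standard_normal(20)
z_sparse = unrolled_block(x, W, soft_threshold)  # ell_1-regularized code
z_nonneg = unrolled_block(x, W, relu)            # nonnegativity-constrained code
```

In this view, the depth of the block corresponds to the number of descent iterations, which is consistent with the abstract's claim that the structured approximation performs monotonic descent updates of an implicit regularization problem.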