Large neural network models have high predictive power but may suffer from overfitting if the training set is not large enough. Therefore, it is desirable to select an appropriate size for neural networks. The destructive approach, which starts with a large architecture and then reduces the size using a Lasso-type penalty, has been used extensively for this task. Despite its popularity, there is no theoretical guarantee for this technique. Based on the notion of minimal neural networks, we posit a rigorous mathematical framework for studying the asymptotic theory of the destructive technique. We prove that Adaptive group Lasso is consistent and can reconstruct the correct number of hidden nodes of one-hidden-layer feedforward networks with high probability. To the best of our knowledge, this is the first theoretical guarantee established for the destructive technique.
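The destructive approach described above can be illustrated with a minimal sketch: a group-Lasso-style proximal (soft-thresholding) step applied to the rows of a hidden-layer weight matrix, where each row collects the incoming weights of one hidden node. Nodes whose weight-vector norm falls below the penalty level are zeroed out entirely, which is how the penalty removes hidden nodes. The function name and the thresholding setup here are illustrative assumptions, not the paper's actual estimator.

```python
import numpy as np

def group_soft_threshold(W, lam):
    """Proximal operator of a group-Lasso penalty, applied row-wise.

    Each row of W is treated as one group (the incoming weights of
    one hidden node). Rows whose Euclidean norm is below lam are set
    to zero, deleting the corresponding hidden node; the rest are
    shrunk toward zero.
    """
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
    return W * scale

rng = np.random.default_rng(0)
W = rng.normal(size=(5, 3))   # 5 hidden nodes, 3 input features
W[3] *= 0.01                  # one node with near-zero weights
W_pruned = group_soft_threshold(W, lam=0.5)
# The weak node's entire row is zeroed, i.e. the node is pruned.
print(np.linalg.norm(W_pruned, axis=1))
```

In the adaptive variant the penalty level `lam` would differ per group, weighted by an initial estimate, so that truly nonzero nodes are penalized less; the fixed `lam` here is only for illustration.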