In this paper, we propose an adaptive group Lasso deep neural network for high-dimensional function approximation in which the input data are generated by a dynamical system and the target function depends on only a few active variables or a few linear combinations of variables. We approximate the target function by a deep neural network and enforce an adaptive group Lasso constraint on the weights of a suitable hidden layer in order to encode this structural constraint on the target function. We optimize the penalized loss function with a proximal algorithm. Using the non-negativity of the Bregman distance, we prove that the proposed optimization procedure achieves loss decay. Our empirical studies show that the proposed method outperforms recent state-of-the-art methods, including the sparse dictionary matrix method and neural networks with or without a group Lasso penalty.
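To make the key optimization step concrete, the following is a minimal sketch of the proximal operator associated with a group Lasso penalty, where each group collects all weights of a layer attached to one input variable (column-wise grouping). This is an illustrative, simplified NumPy implementation under assumed notation, not the paper's exact adaptive variant; `group_lasso_prox` and its arguments are hypothetical names.

```python
import numpy as np

def group_lasso_prox(W, lam, step):
    """Proximal operator of step * lam * sum_j ||W[:, j]||_2.

    Column j groups every weight attached to input variable j, so the
    operator either shrinks a column toward zero or zeroes it out entirely,
    which is how inactive input variables are pruned jointly.
    (Illustrative sketch; the paper's adaptive version reweights lam per group.)
    """
    norms = np.linalg.norm(W, axis=0)               # one 2-norm per group/column
    # Block soft-thresholding: scale each column by max(0, 1 - step*lam/||w_j||).
    scale = np.maximum(0.0, 1.0 - step * lam / np.maximum(norms, 1e-12))
    return W * scale

# One proximal-gradient iteration would be:
#   W = group_lasso_prox(W - step * grad_loss(W), lam, step)
```

A column whose norm falls below `step * lam` is set exactly to zero, which is what produces group-level sparsity over input variables.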