Regularized methods have been widely applied to system identification problems in which the model structure is unknown. This paper proposes an infinite-dimensional sparse learning algorithm based on atomic norm regularization, which decomposes the transfer function into first-order atomic models and solves a group lasso problem that selects a sparse set of poles and identifies the corresponding coefficients. The difficulty in solving this problem lies in the fact that there are infinitely many candidate atomic models. This work proposes a greedy algorithm that generates new candidate atomic models by maximizing the violation of the optimality condition of the current problem, which makes it possible to solve the infinite-dimensional group lasso problem to high precision. The algorithm is further extended to reduce the bias and reject false positives in pole location estimation, via an iteratively reweighted adaptive group lasso and complementary pairs stability selection, respectively. Numerical results demonstrate that the proposed algorithm outperforms benchmark parameterized and regularized methods in terms of both impulse response fitting and pole location estimation.
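The pole-selection idea described above can be sketched with a toy finite-dimensional proxy: a fine grid of candidate poles stands in for the continuum of first-order atoms, a scalar lasso stands in for the group lasso, and new atoms are added greedily where the correlation with the residual most violates the lasso optimality condition. This is not the paper's algorithm; all names and parameter values below are illustrative assumptions.

```python
import numpy as np

T = 50
t = np.arange(1, T + 1)

# True impulse response: two first-order atoms p^t with poles 0.5 and -0.3
# (illustrative ground truth, not from the paper).
true_poles = [0.5, -0.3]
true_coefs = [2.0, 1.0]
g = sum(c * p ** t for c, p in zip(true_coefs, true_poles))

# Candidate atoms on a fine pole grid, a proxy for the continuum of poles.
grid = np.linspace(-0.95, 0.95, 381)
A = np.stack([p ** t for p in grid], axis=1)   # T x n dictionary of atoms
A /= np.linalg.norm(A, axis=0)                 # unit-norm columns

lam = 0.05          # lasso regularization weight (illustrative)
active = []         # indices of currently selected atoms

def solve_lasso(idx, n_iter=5000):
    """ISTA (proximal gradient) for the lasso restricted to active columns."""
    Asub = A[:, idx]
    x = np.zeros(len(idx))
    L = np.linalg.norm(Asub, 2) ** 2           # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = Asub.T @ (Asub @ x - g)
        x = x - grad / L
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
    return x

x = np.array([])
for _ in range(5):
    r = g - (A[:, active] @ x if active else 0.0)   # current residual
    scores = np.abs(A.T @ r)                        # optimality-condition check
    j = int(np.argmax(scores))
    if scores[j] <= lam + 1e-6:                     # |A_j^T r| <= lam for all j:
        break                                       # no atom violates optimality
    if j not in active:
        active.append(j)                            # add the worst violator
    x = solve_lasso(active)                         # re-solve on the active set

selected_poles = sorted(grid[k] for k, xi in zip(active, x) if abs(xi) > 1e-3)
print("selected poles:", selected_poles)
```

Because neighboring grid atoms are highly correlated, the recovered poles approximate rather than exactly match the true ones; the paper's continuous pole search, reweighting, and stability selection steps address exactly this bias and false-positive issue.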