Dictionary learning aims to find a dictionary under which the training data can be sparsely represented, and it is usually achieved by iteratively alternating between two stages: sparse coding and dictionary update. Typical methods for dictionary update focus on refining both the dictionary atoms and their corresponding sparse coefficients using the sparsity patterns obtained from the sparse coding stage, which makes dictionary update a non-convex bilinear inverse problem. In this paper, we propose a Rank-One Matrix Decomposition (ROMD) algorithm that recasts this challenge as a convex problem by resolving the two variables into a set of rank-one matrices. Unlike existing methods, ROMD updates the whole dictionary at once using convex programming. The advantages thus include a convergence guarantee for the dictionary update stage and faster convergence of the overall dictionary learning process. We compare the performance of ROMD with other benchmark dictionary learning algorithms. The results show that ROMD improves recovery accuracy, especially at high sparsity levels and with fewer observations.
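To make the rank-one decomposition concrete, the following is a minimal sketch of the underlying identity; the nuclear-norm relaxation shown afterwards is an illustrative assumption, not necessarily the exact ROMD formulation. Writing the dictionary as $D = [d_1, \dots, d_K]$ and letting $x_k^\top$ denote the $k$-th row of the coefficient matrix $X$, the bilinear product factors into a sum of rank-one matrices:
\[
Y \approx DX = \sum_{k=1}^{K} d_k x_k^\top = \sum_{k=1}^{K} M_k, \qquad \operatorname{rank}(M_k) \le 1.
\]
A convex surrogate for recovering such components could then take the form
\[
\min_{\{M_k\}} \; \sum_{k=1}^{K} \|M_k\|_* \quad \text{s.t.} \quad \Big\| Y - \sum_{k=1}^{K} M_k \Big\|_F \le \epsilon,
\]
where $\|\cdot\|_*$ is the nuclear norm, the standard convex relaxation of rank, and the support of each $M_k$ can additionally be restricted to the sparsity pattern obtained in the sparse coding stage. Each recovered $M_k$ then factors back into a dictionary atom $d_k$ and its coefficient row $x_k^\top$.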