Dictionary learning consists of finding a sparse representation from noisy data and is a common way to encode data-driven prior knowledge on signals. Alternating minimization (AM) is standard for the underlying optimization, where gradient descent steps on the dictionary alternate with sparse coding procedures. The major drawback of this method is its prohibitive computational cost, making it impractical on large real-world data sets. This work studies an approximate formulation of dictionary learning based on unrolling and compares it to alternating minimization to find the best trade-off between speed and precision. We analyze the asymptotic behavior and convergence rate of the gradient estimates in both methods. We show that unrolling performs better on the support of the inner problem solution and during the first iterations. Finally, we apply unrolling to pattern learning in magnetoencephalography (MEG) with the help of a stochastic algorithm and compare the performance to a state-of-the-art method.
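To make the two gradient estimates concrete, here is a minimal JAX sketch (not the paper's implementation) contrasting the AM gradient, which treats the sparse code returned by the inner problem as a constant, with the unrolled gradient, obtained by backpropagating through a fixed number of inner iterations. It assumes the standard ℓ1-penalized (Lasso) formulation solved by ISTA; all names (`ista`, `lam`, `n_iter`) and the toy dimensions are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

def soft_threshold(z, t):
    """Proximal operator of the l1 norm."""
    return jnp.sign(z) * jnp.maximum(jnp.abs(z) - t, 0.0)

def ista(D, x, lam, n_iter):
    """Inner problem: sparse coding of x in dictionary D, unrolled over n_iter ISTA steps."""
    step = 1.0 / jnp.linalg.norm(D, ord=2) ** 2  # 1 / Lipschitz constant of the smooth part
    z = jnp.zeros(D.shape[1])
    for _ in range(n_iter):
        z = soft_threshold(z - step * D.T @ (D @ z - x), step * lam)
    return z

def loss_unrolled(D, x, lam, n_iter=20):
    """l1-penalized reconstruction loss evaluated at the unrolled ISTA output."""
    z = ista(D, x, lam, n_iter)
    return 0.5 * jnp.sum((D @ z - x) ** 2) + lam * jnp.sum(jnp.abs(z))

def loss_am(D, x, lam, n_iter=20):
    """Same loss, but the sparse code is treated as a constant (AM-style)."""
    z = jax.lax.stop_gradient(ista(D, x, lam, n_iter))
    return 0.5 * jnp.sum((D @ z - x) ** 2) + lam * jnp.sum(jnp.abs(z))

# Unrolled gradient: backpropagates through the n_iter inner ISTA iterations.
grad_unrolled = jax.grad(loss_unrolled)
# AM gradient: the classical alternating-minimization direction (D z - x) z^T.
grad_am = jax.grad(loss_am)

# Toy usage on random data.
D = jax.random.normal(jax.random.PRNGKey(0), (30, 50))
x = jax.random.normal(jax.random.PRNGKey(1), (30,))
g_unrolled, g_am = grad_unrolled(D, x, 0.1), grad_am(D, x, 0.1)
```

The only difference between the two losses is the `stop_gradient` call: AM updates the dictionary as if the sparse codes were fixed, while unrolling also accounts for how the codes themselves depend on the dictionary through the inner iterations.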