In this paper we derive sufficient conditions for the convergence of two popular alternating minimisation algorithms for dictionary learning: the Method of Optimal Directions (MOD) and Online Dictionary Learning (ODL), which can also be thought of as approximate K-SVD. We show that, given a well-behaved initialisation that is either within distance at most $1/\log(K)$ of the generating dictionary or has a special structure ensuring that each element of the initialisation points to exactly one generating element, both algorithms converge at a geometric rate to the generating dictionary. These results hold even for data models with non-uniform distributions on the supports of the sparse coefficients. Such distributions allow the appearance frequencies of the dictionary elements to vary widely and thus model real data more closely.
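To make the alternating-minimisation structure concrete, the following is a minimal sketch of a MOD-style iteration. It is not the paper's analysed algorithm but an illustration under simplifying assumptions: the sparse-coding step uses plain thresholding followed by least squares on the selected support, and the function name `mod_dictionary_learning`, its parameters, and the choice of stopping after a fixed number of iterations are all hypothetical.

```python
import numpy as np

def mod_dictionary_learning(Y, K, S, n_iter=50, D0=None, rng=None):
    """Sketch of MOD-style alternating minimisation (illustrative only).

    Y : (d, N) data matrix, each column an approximately S-sparse
        signal in an unknown generating dictionary.
    K : number of dictionary elements (atoms).
    S : sparsity level assumed in the sparse-coding step.
    D0: (d, K) initialisation; per the paper's result it should be
        close to the generating dictionary or suitably structured.
    """
    rng = np.random.default_rng() if rng is None else rng
    d, N = Y.shape
    D = rng.standard_normal((d, K)) if D0 is None else D0.copy()
    D /= np.linalg.norm(D, axis=0)           # unit-norm atoms

    for _ in range(n_iter):
        # Sparse-coding step: for each signal keep the S atoms with
        # the largest correlations, then solve least squares on them.
        corr = D.T @ Y                        # (K, N) inner products
        X = np.zeros((K, N))
        for n in range(N):
            idx = np.argsort(np.abs(corr[:, n]))[-S:]
            X[idx, n] = np.linalg.lstsq(D[:, idx], Y[:, n], rcond=None)[0]

        # Dictionary-update step (MOD): solve min_D ||Y - DX||_F,
        # i.e. D = Y X^T (X X^T)^{-1} = Y pinv(X), then renormalise.
        D = Y @ np.linalg.pinv(X)
        D /= np.linalg.norm(D, axis=0)
    return D
```

ODL differs only in the dictionary-update step, replacing the batch least-squares solve with updates based on accumulated second-moment statistics; the alternation between sparse coding and dictionary update is the same.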