Many applications, such as audio and image processing, have shown that sparse representations are a powerful and efficient signal modeling technique. Finding an optimal dictionary that simultaneously yields the sparsest representations of the data and the smallest approximation error is a hard problem, addressed by dictionary learning (DL). We study how DL performs in detecting abnormal samples in a dataset of signals. In this paper we use a particular DL formulation that seeks a uniform sparse representation model to identify the subspace underlying the majority of samples in a dataset, using a K-SVD-type algorithm. Numerical simulations show that the resulting subspace can be used efficiently to discriminate anomalies from regular data points.
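The core idea, that most samples lie near a common low-dimensional subspace while anomalies do not, can be illustrated with a minimal sketch. Note that this is not the paper's K-SVD-type algorithm: here a truncated SVD stands in for the learned dictionary, and the data dimensions, subspace rank, and residual threshold are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of residual-based anomaly detection: NOT the paper's
# K-SVD-type algorithm. We estimate the dominant low-dimensional subspace
# of a dataset whose majority of samples lie near that subspace, then
# flag samples with a large reconstruction residual as anomalies.
# Dimensions, rank, and threshold are illustrative choices.

rng = np.random.default_rng(0)

# Regular samples: 200 points near a 3-dimensional subspace of R^20
basis = rng.standard_normal((20, 3))
regular = basis @ rng.standard_normal((3, 200)) \
          + 0.01 * rng.standard_normal((20, 200))

# Anomalies: 5 points drawn from the full 20-dimensional ambient space
anomalies = rng.standard_normal((20, 5))

Y = np.hstack([regular, anomalies])   # signals stored as columns

# Estimate the dominant subspace directly from the (contaminated) data;
# the regular majority dominates the top singular vectors
U, _, _ = np.linalg.svd(Y, full_matrices=False)
D = U[:, :3]                          # orthonormal basis ("dictionary")

# Reconstruction residual per sample: ||y - D D^T y||_2
residuals = np.linalg.norm(Y - D @ (D.T @ Y), axis=0)

# Samples whose residual is far above the typical (median) level
# are declared anomalies
threshold = 5 * np.median(residuals)
flagged = np.nonzero(residuals > threshold)[0]
print(flagged)   # indices of the flagged samples
```

Unlike this sketch, the paper's method learns a (generally non-orthogonal) dictionary under a uniform sparsity constraint; the common point is that anomalies are separated by how poorly the learned model reconstructs them.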