Owing to its great success in inference and denoising tasks, Dictionary Learning (DL) and its related sparse optimization formulations have garnered considerable research interest. While most solutions focus on single-layer dictionaries, even the recently improved Deep DL methods still fall short on a number of issues. We therefore propose a novel Deep DL approach in which each DL layer is formulated and solved as the combination of one linear layer and a Recurrent Neural Network (RNN), where the RNN is flexibly regarded as a layer-associated learned metric. Our work unveils new connections between Neural Networks and Deep DL, and provides a novel, efficient, and competitive approach to jointly learning the deep transforms and metrics. Extensive experiments demonstrate that the proposed method outperforms not only existing Deep DL methods, but also state-of-the-art generic Convolutional Neural Networks.
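For concreteness, here is a minimal sketch of how one such DL layer could look, assuming a LISTA-style unrolling in which a linear layer computes the feed-forward transform and a recurrent soft-thresholding update plays the role of the layer-associated learned metric. All names (`UnrolledDLLayer`, `W`, `S`, `lambd`, `n_steps`) are illustrative assumptions, not the authors' notation or implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnrolledDLLayer(nn.Module):
    """One dictionary-learning layer as a linear map plus an RNN.

    Hypothetical sketch: the linear layer computes b = W x, and the
    recurrent update z <- softshrink(b + S z) reuses the same weights S
    at every step, acting as a learned metric in a LISTA-style unrolling.
    """
    def __init__(self, in_dim, code_dim, n_steps=3, lambd=0.1):
        super().__init__()
        self.W = nn.Linear(in_dim, code_dim, bias=False)    # feed-forward transform
        self.S = nn.Linear(code_dim, code_dim, bias=False)  # recurrent "metric" weights
        self.n_steps = n_steps
        self.lambd = lambd

    def forward(self, x):
        b = self.W(x)                  # linear-layer output, fixed across iterations
        z = F.softshrink(b, self.lambd)
        for _ in range(self.n_steps):  # RNN: identical weights reused each step
            z = F.softshrink(b + self.S(z), self.lambd)
        return z

# Stacking such layers yields a deep DL network:
net = nn.Sequential(UnrolledDLLayer(64, 128), UnrolledDLLayer(128, 256))
codes = net(torch.randn(8, 64))
```

The soft-thresholding nonlinearity arises from the sparsity-promoting term in the underlying sparse coding objective; unrolling a fixed number of its iterations is what makes the layer trainable end-to-end alongside the linear transform.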