In recent years, great success has been witnessed in building problem-specific deep networks by unrolling iterative algorithms, for solving inverse problems and beyond. Unrolling is believed to combine the model-based prior with the learning capacity of deep networks. This paper revisits the role of unrolling as a design approach for deep networks: to what extent is the resulting special architecture superior, and can we find better ones? Using LISTA for sparse recovery as a representative example, we conduct the first thorough design-space study of unrolled models. Among all possible variations, we focus on extensively varying the connectivity patterns and neuron types, leading to a gigantic design space arising from LISTA. To efficiently explore this space and identify the top performers, we leverage the emerging tool of neural architecture search (NAS). We carefully examine the top searched architectures in a number of settings, and are able to discover networks that consistently outperform LISTA. We further present visualizations and analyses to "open the black box", and find that the top searched architectures exhibit highly consistent and potentially transferable patterns. We hope our study sparks further reflection and exploration on how to better combine model-based optimization priors with data-driven learning.
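For readers unfamiliar with unrolling, the minimal NumPy sketch below illustrates the premise: the forward pass of LISTA (Gregor & LeCun, 2010), where each network layer mirrors one ISTA iteration for sparse recovery but the matrices and thresholds become learnable parameters. The toy dimensions, random dictionary, and ISTA-consistent initialization are illustrative assumptions, not the experimental setup studied in this paper.

```python
import numpy as np

def soft_threshold(v, theta):
    """Elementwise soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def lista_forward(y, W1, W2, thetas):
    """Forward pass of a K-layer LISTA network.

    Each layer mirrors one ISTA iteration, x <- soft(W1 y + W2 x, theta),
    but W1, W2, and the per-layer thresholds are free parameters that
    would be learned from data rather than fixed by the dictionary.
    """
    x = soft_threshold(W1 @ y, thetas[0])   # first layer, starting from x = 0
    for theta in thetas[1:]:
        x = soft_threshold(W1 @ y + W2 @ x, theta)
    return x

# Toy usage with an (untrained) initialization derived from ISTA:
m, n, K = 20, 50, 16
rng = np.random.default_rng(0)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random dictionary
L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
W1 = A.T / L                                   # ISTA-consistent initialization
W2 = np.eye(n) - (A.T @ A) / L
thetas = np.full(K, 0.1 / L)                   # lambda / L per layer
y = A @ rng.standard_normal(n)
x_hat = lista_forward(y, W1, W2, thetas)
```

In practice, `W1`, `W2`, and `thetas` are trained end-to-end on paired measurements and ground-truth sparse codes; the design-space study described above then varies the connectivity patterns and neuron types of exactly such unrolled layers.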