What if deep neural networks could learn from sparsity-inducing priors? When networks are designed by combining layer modules (CNNs, RNNs, etc.), engineers rarely exploit inductive biases, i.e., well-known rules or prior knowledge, beyond annotated training data sets. We focus on employing sparsity-inducing priors in deep learning to encourage networks to concisely capture the nature of high-dimensional data in an unsupervised way. To use non-differentiable sparsity-inducing norms as loss functions, we plug their proximal mappings into the automatic differentiation framework. We demonstrate unsupervised learning of a U-Net for background subtraction using low-rank and sparse priors. The U-Net learns the moving objects in a training sequence without any annotation, and successfully detects foreground objects in test sequences.
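To make the idea of plugging proximal mappings into an autodiff framework concrete, the following is a minimal sketch, assuming PyTorch; the names `prox_l1`, `prox_nuclear`, and `unsupervised_step` are placeholders and not taken from the paper. It uses soft-thresholding and singular value thresholding, the proximal mappings of the l1 and nuclear norms, to build a differentiable surrogate loss around the network output, rather than reproducing the authors' exact implementation.

```python
import torch

def prox_l1(x, lam):
    # Soft-thresholding: proximal mapping of lam * ||x||_1.
    return torch.sign(x) * torch.clamp(x.abs() - lam, min=0.0)

def prox_nuclear(X, lam):
    # Singular value thresholding: proximal mapping of lam * ||X||_* for a 2-D matrix.
    U, S, Vh = torch.linalg.svd(X, full_matrices=False)
    return U @ torch.diag(prox_l1(S, lam)) @ Vh

def unsupervised_step(net, D, optimizer, lam_lowrank=1.0, lam_sparse=0.1):
    """One hypothetical training step on a data matrix D (frames x pixels).

    The network predicts the sparse foreground S_hat; the implied background
    L_hat = D - S_hat is pulled toward low rank and S_hat toward sparsity by
    penalizing the distance to the corresponding proximal points.
    """
    optimizer.zero_grad()
    S_hat = net(D)                 # predicted sparse component
    L_hat = D - S_hat              # implied low-rank component
    with torch.no_grad():          # proximal targets are held constant
        L_target = prox_nuclear(L_hat, lam_lowrank)
        S_target = prox_l1(S_hat, lam_sparse)
    # Quadratic distance to the proximal points; its gradient is proportional
    # to the Moreau-envelope gradients of the two non-differentiable norms.
    loss = torch.nn.functional.mse_loss(L_hat, L_target) \
         + torch.nn.functional.mse_loss(S_hat, S_target)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Detaching the proximal targets keeps the backward pass standard: gradients flow only through the network output, so the non-differentiability of the l1 and nuclear norms never enters autograd.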