Despite the recent success of end-to-end learned representations, hand-crafted optical flow features are still widely used in video analysis tasks. To fill this gap, we propose TVNet, a novel end-to-end trainable neural network, to learn optical-flow-like features from data. TVNet subsumes a specific optical flow solver, the TV-L1 method, and is initialized by unfolding its optimization iterations as neural layers. TVNet can therefore be used directly without any extra learning. Moreover, it can be naturally concatenated with other task-specific networks to formulate an end-to-end architecture, thus making our method more efficient than current multi-stage approaches by avoiding the need to pre-compute and store features on disk. Finally, the parameters of TVNet can be further fine-tuned by end-to-end training. This enables TVNet to learn richer, task-specific patterns beyond exact optical flow. Extensive experiments on two action recognition benchmarks verify the effectiveness of the proposed approach. Our TVNet achieves better accuracies than all compared methods, while being competitive with the fastest counterpart in terms of feature extraction time.
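The core idea of initializing a network by "unfolding" a solver's optimization iterations as layers can be illustrated with a toy example. The sketch below is a hedged illustration of the general unrolling technique, not the actual TV-L1 update equations: each iteration of gradient descent on a least-squares energy becomes one "layer" whose step size is a learnable parameter, initialized to the classical solver's value (the function names and the quadratic objective are hypothetical, chosen only for the demonstration).

```python
import numpy as np

def make_unrolled_solver(n_layers, step):
    """Per-layer parameters, initialized from the classic solver.

    Every layer starts with the same hand-derived step size; after
    end-to-end fine-tuning these could diverge per layer, which is
    how the unrolled network can learn beyond the original solver.
    """
    return [step] * n_layers

def forward(A, b, x0, steps):
    """Run the unrolled network: one gradient step per layer."""
    x = x0
    for s in steps:
        grad = A.T @ (A @ x - b)   # gradient of 0.5 * ||A x - b||^2
        x = x - s * grad           # one 'layer' = one solver iteration
    return x

A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([2.0, 3.0])
steps = make_unrolled_solver(n_layers=50, step=0.2)
x = forward(A, b, np.zeros(2), steps)
# With solver-derived initialization, the untrained unrolled network
# already approximates the least-squares solution [1, 3].
```

This mirrors the abstract's claim that TVNet "can be used directly without any extra learning": the initialized layers already reproduce the solver's behavior, and training only refines them.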