The success of deep learning models has led to their adaptation and adoption by prominent video understanding methods. The majority of these approaches encode features in a joint space-time modality for which the inner workings and learned representations are difficult to visually interpret. We propose LEArned Preconscious Synthesis (LEAPS), an architecture-agnostic method for synthesizing videos from the internal spatiotemporal representations of models. Using a stimulus video and a target class, we prime a fixed space-time model and iteratively optimize a video initialized with random noise. We incorporate additional regularizers to improve the feature diversity of the synthesized videos as well as the cross-frame temporal coherence of motions. We quantitatively and qualitatively evaluate the applicability of LEAPS by inverting a range of spatiotemporal convolutional and attention-based architectures trained on Kinetics-400, which to the best of our knowledge has not been previously accomplished.
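To make the described optimization concrete, the following is a minimal sketch of a class-conditional model-inversion loop of the kind the abstract outlines: a fixed, pretrained space-time model is given a target class and a noise-initialized video is iteratively optimized against it, with a simple frame-to-frame total-variation penalty standing in for the temporal-coherence regularizer. This is not the authors' implementation; the model (torchvision's Kinetics-400 r3d_18), loss weights, and hyperparameters are illustrative assumptions, and LEAPS's stimulus-video priming and feature-diversity regularizer are omitted.

```python
import torch
import torch.nn.functional as F
from torchvision.models.video import r3d_18, R3D_18_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"

# Fixed space-time model pretrained on Kinetics-400 (assumed backbone).
model = r3d_18(weights=R3D_18_Weights.DEFAULT).to(device).eval()
for p in model.parameters():
    p.requires_grad_(False)

target_class = 42  # assumed Kinetics-400 class index
# Video initialized with random noise: (batch, channels, frames, height, width).
video = torch.randn(1, 3, 16, 112, 112, device=device, requires_grad=True)
optimizer = torch.optim.Adam([video], lr=0.05)

for step in range(500):
    optimizer.zero_grad()
    logits = model(video)
    # Drive the synthesized video toward the target class.
    cls_loss = F.cross_entropy(logits, torch.tensor([target_class], device=device))
    # Temporal-coherence regularizer: penalize large frame-to-frame changes.
    tv_time = (video[:, :, 1:] - video[:, :, :-1]).abs().mean()
    loss = cls_loss + 0.1 * tv_time  # assumed regularization weight
    loss.backward()
    optimizer.step()
```

In practice, such inversion loops also clamp or reparameterize the video to a valid pixel range and add spatial regularizers; the sketch keeps only the two terms named in the abstract's description.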