There is limited understanding of the information captured by deep spatiotemporal models in their intermediate representations. For example, while evidence suggests that action recognition algorithms are heavily influenced by the visual appearance of single frames, no quantitative methodology exists for evaluating static bias in the latent representation relative to bias toward dynamics. We tackle this challenge by proposing an approach for quantifying the static and dynamic biases of any spatiotemporal model, and we apply it to three tasks: action recognition, automatic video object segmentation (AVOS), and video instance segmentation (VIS). Our key findings are: (i) most examined models are biased toward static information; (ii) some datasets that are assumed to be biased toward dynamics are actually biased toward static information; (iii) individual channels in an architecture can be biased toward static information, dynamic information, or a combination of the two; (iv) most models converge to their culminating biases in the first half of training. We then explore how these biases affect performance on dynamically biased datasets. For action recognition, we propose StaticDropout, a semantically guided dropout that debiases a model away from static information and toward dynamics. For AVOS, we design a better combination of fusion and cross-connection layers than in previous architectures.
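To make the quantification idea concrete, the following minimal PyTorch sketch scores each channel's static and dynamic bias as the correlation of its responses over input pairs that share one factor (appearance or motion) while varying the other. The pairing protocol and the helper name `channelwise_bias` are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch: estimate per-channel static vs. dynamic bias from the
# correlation of pooled activations over factor-sharing input pairs.
# ASSUMPTION: feats_a/feats_b come from pairs that share appearance
# (static pairs) or motion (dynamic pairs); the paper's pairing
# protocol may differ.
import torch

def channelwise_bias(feats_a: torch.Tensor, feats_b: torch.Tensor) -> torch.Tensor:
    """feats_a, feats_b: (N, C) spatiotemporally pooled activations for N
    input pairs. Returns the per-channel Pearson correlation across pairs."""
    a = feats_a - feats_a.mean(dim=0, keepdim=True)
    b = feats_b - feats_b.mean(dim=0, keepdim=True)
    cov = (a * b).mean(dim=0)
    std = a.std(dim=0, unbiased=False) * b.std(dim=0, unbiased=False)
    return cov / (std + 1e-8)

# static_score[c] high:  channel c fires alike whenever appearance is shared.
# dynamic_score[c] high: channel c fires alike whenever motion is shared.
# A channel can score high on both, i.e., a joint static-dynamic encoding
# as in finding (iii).
```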
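Similarly, the sketch below illustrates one way a semantically guided dropout in the spirit of StaticDropout could be realized: during training, the channels that respond most strongly to a static version of the clip (one frame repeated in time) are dropped, pushing the model to rely on dynamics. The class name, channel-scoring rule, and rescaling are assumptions for illustration, not the paper's exact design.

```python
# Minimal sketch of a semantically guided dropout in the spirit of
# StaticDropout. ASSUMPTION: channels are scored by how strongly they
# respond to a static clip (one frame repeated T times); the exact
# scoring rule in the paper may differ.
import torch
import torch.nn as nn

class StaticGuidedDropout(nn.Module):
    def __init__(self, drop_frac: float = 0.25):
        super().__init__()
        self.drop_frac = drop_frac  # fraction of most static-biased channels to drop

    def forward(self, feats: torch.Tensor, static_feats: torch.Tensor) -> torch.Tensor:
        # feats, static_feats: (B, C, T, H, W) activations from the full clip
        # and from its static counterpart, e.g. clip[:, :, :1].expand_as(clip).
        if not self.training:
            return feats  # identity at inference
        # Score each channel by its mean activation magnitude on the static
        # clip: high score -> channel fires even without motion (static bias).
        scores = static_feats.abs().mean(dim=(0, 2, 3, 4))   # (C,)
        k = int(self.drop_frac * scores.numel())
        drop_idx = scores.topk(k).indices                     # most static-biased channels
        mask = torch.ones_like(scores)
        mask[drop_idx] = 0.0
        # Rescale kept channels so the expected activation magnitude is preserved.
        mask = mask * scores.numel() / max(scores.numel() - k, 1)
        return feats * mask.view(1, -1, 1, 1, 1)
```

In this sketch the backbone would be run twice per training step, once on the clip and once on its static counterpart, with the mask applied to the dynamic pass; at inference the layer is an identity.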