Action classification has made great progress, but segmenting and recognizing actions in long untrimmed videos remains a challenging problem. Most state-of-the-art methods focus on designing temporal convolution-based models, but the inflexibility of temporal convolutions and the difficulty of modeling long-term temporal dependencies restrict the potential of these models. Transformer-based models, with their flexibility and sequence-modeling capability, have recently been applied to various tasks. However, the lack of inductive bias and the inefficiency of handling long video sequences limit the application of Transformers to action segmentation. In this paper, we design a pure Transformer-based model without temporal convolutions, called Temporal U-Transformer (TUT), which incorporates temporal sampling. The U-Transformer architecture reduces complexity while introducing an inductive bias that adjacent frames are more likely to belong to the same class, but the coarse temporal resolutions it introduces lead to misclassification around boundaries. We observe that the similarity distribution between a boundary frame and its neighboring frames depends on whether the boundary frame is the start or the end of an action segment. Therefore, we further propose a boundary-aware loss based on the distribution of similarity scores between frames, computed from the attention modules, to enhance the ability to recognize boundaries. Extensive experiments demonstrate the effectiveness of our model.
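As an illustration of the boundary-aware idea described above, the following is a minimal sketch, not the paper's exact formulation. It assumes the loss supervises the softmax distribution of similarity scores between an annotated boundary frame and its temporal neighbors, pushing a segment-start frame to resemble the frames after it and a segment-end frame to resemble the frames before it. All names and parameters (k, tau, etc.) are hypothetical.

```python
# Hedged sketch of a boundary-aware loss on attention-style similarity scores.
import torch
import torch.nn.functional as F

def boundary_aware_loss(features, boundary_idx, is_start, k=4, tau=0.1):
    """features: (T, D) frame features from an attention module.
    boundary_idx: (B,) indices of annotated boundary frames.
    is_start: (B,) bool, True if the boundary frame starts a segment.
    """
    T, _ = features.shape
    losses = []
    for t, start in zip(boundary_idx.tolist(), is_start.tolist()):
        left = features[max(t - k, 0):t]            # neighbors before the boundary
        right = features[t + 1:min(t + 1 + k, T)]   # neighbors after the boundary
        neighbors = torch.cat([left, right], dim=0)
        if neighbors.numel() == 0:
            continue
        # similarity scores, analogous to (unnormalized) attention logits
        sim = F.cosine_similarity(features[t:t + 1], neighbors, dim=-1) / tau
        probs = F.softmax(sim, dim=0)
        # target: put mass on the side of the boundary that lies inside the segment
        target = torch.zeros_like(probs)
        if start:
            target[len(left):] = 1.0   # a start frame should resemble later frames
        else:
            target[:len(left)] = 1.0   # an end frame should resemble earlier frames
        target = target / target.sum().clamp(min=1.0)
        losses.append(-(target * probs.clamp(min=1e-8).log()).sum())
    return torch.stack(losses).mean() if losses else features.new_zeros(())
```

In this sketch the cross-entropy between the neighbor-similarity distribution and a one-sided target encodes the observation from the abstract; the actual loss in the paper may differ in how the distribution and targets are defined.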