Temporal action segmentation (TAS) is a critical step toward long-term video understanding. Recent studies follow a pattern of building models on pre-extracted features rather than raw video frames. However, we argue that such models are complicated to train and limited in their application scenarios: they cannot segment human actions in a video in real time, because they can only run after the features of the full video have been extracted. Since the real-time action segmentation task differs from the TAS task, we define it as the streaming video real-time temporal action segmentation (SVTAS) task. In this paper, we propose a real-time, end-to-end, multi-modality model for the SVTAS task. More specifically, without access to any future information, we segment the human action of the current streaming video chunk in real time. Furthermore, the proposed model fuses the feature of the last streaming video chunk, extracted by a language model, with the image feature of the current chunk, extracted by an image model, to improve the quality of real-time temporal action segmentation. To the best of our knowledge, this is the first multi-modality real-time temporal action segmentation model. Under the same evaluation criteria as full-video temporal action segmentation, our model segments human actions in real time with less than 40% of the computation of the state-of-the-art model, while achieving 90% of its accuracy.
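The streaming fusion described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the placeholder encoders, feature dimensions, and concatenation-based fusion are all assumptions made for the example; in the actual model, the language-model and image-model backbones would be pretrained networks.

```python
import torch
import torch.nn as nn

class StreamingFusionSegmenter(nn.Module):
    """Illustrative sketch of SVTAS-style multi-modality fusion.

    Hypothetical stand-ins: `text_encoder` plays the role of the
    language model that summarizes the previous streaming chunk, and
    `image_encoder` plays the role of the image model that encodes
    the current chunk's frames. Dimensions and the concatenation
    fusion are assumptions, not the paper's actual design.
    """

    def __init__(self, img_dim=512, txt_dim=512, num_classes=19):
        super().__init__()
        # Placeholder encoders; a real system would use pretrained backbones.
        self.image_encoder = nn.Linear(3 * 224 * 224, img_dim)
        self.text_encoder = nn.Linear(img_dim, txt_dim)
        self.classifier = nn.Linear(img_dim + txt_dim, num_classes)
        self.prev_chunk_feat = None  # feature of the last streaming chunk

    def forward(self, chunk_frames):
        # chunk_frames: (T, 3, 224, 224), the frames of the current chunk only;
        # no future frames are ever seen.
        T = chunk_frames.shape[0]
        img_feat = self.image_encoder(chunk_frames.reshape(T, -1))  # (T, img_dim)

        if self.prev_chunk_feat is None:
            # First chunk of the stream: no past context is available yet.
            ctx = img_feat.new_zeros(T, self.text_encoder.out_features)
        else:
            # Broadcast the last chunk's feature to every current frame.
            ctx = self.prev_chunk_feat.expand(T, -1)

        # Fuse past-chunk context with current image features per frame.
        logits = self.classifier(torch.cat([img_feat, ctx], dim=-1))  # (T, C)

        # Summarize the current chunk for the next streaming step.
        self.prev_chunk_feat = self.text_encoder(
            img_feat.mean(0, keepdim=True)).detach()
        return logits

# Usage: feed chunks one at a time, as they arrive from the stream.
model = StreamingFusionSegmenter()
for chunk in torch.randn(5, 8, 3, 224, 224):  # 5 chunks of 8 frames each
    per_frame_logits = model(chunk)           # (8, num_classes)
```

The key property the sketch preserves is causality: each chunk is classified using only the current frames and a cached feature of the previous chunk, which is what makes real-time, streaming operation possible.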