Video action segmentation and recognition tasks are widely applied in many fields. Most previous studies employ large-scale visual models with high computational cost to understand videos comprehensively, but few studies directly employ graph models to reason about video. Graph models offer the benefits of fewer parameters, lower computational cost, a large receptive field, and flexible neighborhood message aggregation. In this paper, we present a graph-based method named Semantic2Graph that turns video action segmentation and recognition into a graph node classification problem. To preserve fine-grained relations in videos, we construct the video graph at the frame level and design three types of edges: temporal, semantic, and self-loop. We combine visual, structural, and semantic features as node attributes. Semantic edges are used to model long-term spatio-temporal relations, and the semantic features are embeddings of the label text derived from textual prompts. A graph neural network (GNN) model is used to learn multi-modal feature fusion. Experimental results show that Semantic2Graph improves over state-of-the-art results on GTEA and 50Salads. Multiple ablation experiments further confirm that semantic features improve model performance and that semantic edges enable Semantic2Graph to capture long-term dependencies at low cost.
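To make the described pipeline concrete, the following is a minimal sketch of frame-level graph construction with the three edge types and GNN-based node classification. It is not the authors' implementation: the use of PyTorch Geometric, all feature dimensions, the same-label rule for semantic edges, the bounded edge window, and the two-layer GCN are illustrative assumptions.

```python
# Sketch: frame-level video graph with temporal, semantic, and self-loop
# edges, classified per-frame by a small GNN. All sizes are placeholders.
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

T, D_VIS, D_SEM, NUM_CLASSES = 100, 2048, 512, 11  # hypothetical sizes

# Node attributes: per-frame visual features (e.g., from a pretrained video
# backbone) concatenated with semantic features (label-text embeddings from
# a prompt-based text encoder). Random tensors stand in for both here.
visual = torch.randn(T, D_VIS)
semantic = torch.randn(T, D_SEM)
x = torch.cat([visual, semantic], dim=1)

edges = []
# Temporal edges: connect consecutive frames in both directions.
for i in range(T - 1):
    edges += [(i, i + 1), (i + 1, i)]
# Self-loop edges: each frame aggregates its own features.
edges += [(i, i) for i in range(T)]
# Semantic edges (assumed rule): link frames sharing the same action label,
# giving the graph long-range connections beyond the temporal chain.
labels = torch.randint(0, NUM_CLASSES, (T,))  # placeholder frame labels
for i in range(T):
    for j in range(i + 2, min(i + 50, T)):  # bounded window, illustrative
        if labels[i] == labels[j]:
            edges += [(i, j), (j, i)]
edge_index = torch.tensor(edges, dtype=torch.long).t().contiguous()

graph = Data(x=x, edge_index=edge_index, y=labels)

class FrameGNN(torch.nn.Module):
    """Two-layer GCN that classifies every frame node into an action class."""
    def __init__(self, in_dim, hidden, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, num_classes)

    def forward(self, data):
        h = F.relu(self.conv1(data.x, data.edge_index))
        return self.conv2(h, data.edge_index)  # per-node logits

model = FrameGNN(D_VIS + D_SEM, 256, NUM_CLASSES)
logits = model(graph)  # shape: (T, NUM_CLASSES)
loss = F.cross_entropy(logits, graph.y)
print(logits.shape, float(loss))
```

Note that building semantic edges from ground-truth labels only applies at training time; at inference such edges would have to come from predicted labels or feature similarity, a detail this sketch does not model.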