Temporal action segmentation assigns an action label to every frame of an untrimmed input video containing a sequence of multiple actions. For this task, we propose an encoder-decoder-style architecture named C2F-TCN featuring a "coarse-to-fine" ensemble of decoder outputs. The C2F-TCN framework is enhanced with a novel, model-agnostic temporal feature augmentation strategy based on computationally inexpensive stochastic max-pooling of segments. It produces more accurate and better-calibrated supervised results on three benchmark action segmentation datasets. We show that the architecture is flexible for both supervised and representation learning. In line with this, we present a novel unsupervised way to learn frame-wise representations from C2F-TCN. Our unsupervised learning approach hinges on the clustering capabilities of the input features and the formation of multi-resolution features from the decoder's implicit structure. Further, we provide the first semi-supervised temporal action segmentation results by merging representation learning with conventional supervised learning. Our semi-supervised learning scheme, called ``Iterative-Contrastive-Classify (ICC)'', progressively improves in performance with more labeled data. With 40% of the videos labeled, ICC semi-supervised learning in C2F-TCN performs on par with its fully supervised counterpart.
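To make the segment-wise stochastic max-pooling augmentation concrete, the sketch below partitions a frame-wise feature sequence into randomly sized contiguous temporal segments and max-pools each one over time. This is a minimal illustration only: the function name, tensor shapes, feature dimension, and segment count are assumptions for the example, not the paper's exact implementation.

```python
import torch

def stochastic_segment_maxpool(features: torch.Tensor, num_segments: int) -> torch.Tensor:
    """Hypothetical sketch of segment-wise stochastic max-pooling.

    `features` has shape (C, T): C feature channels over T frames.
    The T frames are split into `num_segments` contiguous segments with
    randomly drawn boundaries; each segment is max-pooled over time,
    yielding an augmented sequence of shape (C, num_segments).
    """
    C, T = features.shape
    # Draw (num_segments - 1) distinct interior boundaries uniformly at random.
    cuts = torch.randperm(T - 1)[: num_segments - 1] + 1
    bounds = torch.cat([torch.tensor([0]), cuts.sort().values, torch.tensor([T])])
    # Max-pool each random segment over the temporal axis.
    pooled = [features[:, s:e].max(dim=1).values for s, e in zip(bounds[:-1], bounds[1:])]
    return torch.stack(pooled, dim=1)  # (C, num_segments)

# Example: pool a 2048-dim feature sequence of 100 frames into 16 random segments.
x = torch.randn(2048, 100)
aug = stochastic_segment_maxpool(x, num_segments=16)
print(aug.shape)  # torch.Size([2048, 16])
```

Because the segment boundaries are re-drawn on every call, repeated invocations on the same input yield different pooled sequences, which is what makes this usable as a cheap temporal feature augmentation.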