The growing variety of practical tasks in video understanding poses a great challenge: designing a universal solution that is accessible to a broad audience and suitable for demanding edge-oriented inference. In this paper we focus on designing a network architecture and a training pipeline to tackle these challenges. Our architecture takes the best from previous ones and brings the ability to succeed not only in appearance-based action recognition tasks but in motion-based problems as well. Furthermore, we formulate the induced label noise problem and propose the Adaptive Clip Selection (ACS) framework to deal with it. Together, this makes LIGAR a general-purpose action recognition framework. We also report an extensive analysis on general and gesture recognition datasets, showing an excellent trade-off between performance and accuracy in comparison with state-of-the-art solutions. Training code is available at: https://github.com/openvinotoolkit/training_extensions. For efficient edge-oriented inference, all trained models can be exported to the OpenVINO format.
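To illustrate the last point, below is a minimal sketch of how a trained PyTorch model could be exported to the OpenVINO IR format via ONNX. The placeholder backbone, input shape, and file names are illustrative assumptions and do not reflect the exact export flow used by the LIGAR training code.

```python
import torch
import torchvision
import openvino as ov

# Placeholder network standing in for a trained action recognition model.
model = torchvision.models.resnet18(weights=None)
model.eval()

# Dummy input matching the assumed inference resolution (batch, channels, height, width).
dummy_input = torch.randn(1, 3, 224, 224)

# Step 1: export the model to ONNX.
torch.onnx.export(model, dummy_input, "action_model.onnx", opset_version=13)

# Step 2: convert the ONNX model to OpenVINO IR and save it (.xml + .bin).
ov_model = ov.convert_model("action_model.onnx")
ov.save_model(ov_model, "action_model.xml")
```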