Early diagnosis of brain damage is critical for earlier medical intervention in infants with cerebral palsy (CP). Although general movements assessment (GMA) has shown promising results in early CP detection, it is laborious. Most existing works automate GMA by taking videos as input for fidgety movements (FMs) classification. These methods require complete observation of the video and cannot localize the frames that contain normal FMs. We therefore propose a novel approach, WO-GMA, to perform FMs localization in a weakly supervised online setting. Infant body keypoints are first extracted as the input to WO-GMA. WO-GMA then performs local spatio-temporal feature extraction followed by two network branches that generate clip-level pseudo labels and model online actions, respectively. With the clip-level pseudo labels, the action modeling branch learns to detect FMs in an online fashion. Experimental results on a dataset of 757 videos of different infants show that WO-GMA achieves state-of-the-art video-level classification and clip-level detection results. Moreover, only the first 20% of a video's duration is needed to obtain classification results as good as with full observation, implying a significantly shortened FMs diagnosis time. Code is available at: https://github.com/scofiedluo/WO-GMA.
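To make the two-branch design concrete, below is a minimal, hypothetical sketch of the pipeline described above: keypoint sequences pass through a local spatio-temporal extractor, then into a clip-level pseudo-label branch and a causal online action modeling branch. All module names, dimensions, and layer choices are illustrative assumptions, not the authors' implementation; the actual code is at https://github.com/scofiedluo/WO-GMA.

```python
# Illustrative sketch only; layer choices and dimensions are assumptions.
import torch
import torch.nn as nn


class LocalSpatioTemporalExtractor(nn.Module):
    """Encodes per-frame body keypoints into clip-level features (illustrative)."""

    def __init__(self, num_joints=17, coord_dim=2, feat_dim=128):
        super().__init__()
        self.point_encoder = nn.Linear(num_joints * coord_dim, feat_dim)
        self.temporal_conv = nn.Conv1d(feat_dim, feat_dim, kernel_size=3, padding=1)

    def forward(self, keypoints):  # keypoints: (batch, time, joints, coords)
        b, t, j, c = keypoints.shape
        x = torch.relu(self.point_encoder(keypoints.reshape(b, t, j * c)))  # (b, t, d)
        # Local temporal mixing over neighboring clips.
        return self.temporal_conv(x.transpose(1, 2)).transpose(1, 2)        # (b, t, d)


class WOGMASketch(nn.Module):
    """Two branches over shared features: clip-level pseudo labels and online action modeling."""

    def __init__(self, feat_dim=128, num_classes=1):
        super().__init__()
        self.backbone = LocalSpatioTemporalExtractor(feat_dim=feat_dim)
        # Branch 1: clip-level scores, pooled into a video-level prediction (weak supervision).
        self.pseudo_label_branch = nn.Linear(feat_dim, num_classes)
        # Branch 2: causal (online) modeling, here a unidirectional GRU that only sees past frames.
        self.online_branch = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.online_head = nn.Linear(feat_dim, num_classes)

    def forward(self, keypoints):
        feats = self.backbone(keypoints)                 # (b, t, d)
        clip_scores = self.pseudo_label_branch(feats)    # per-clip FMs scores -> pseudo labels
        video_score = clip_scores.mean(dim=1)            # video-level classification
        online_feats, _ = self.online_branch(feats)      # causal features for online detection
        online_scores = self.online_head(online_feats)   # per-frame online FMs detection
        return video_score, clip_scores, online_scores


if __name__ == "__main__":
    model = WOGMASketch()
    dummy = torch.randn(2, 64, 17, 2)  # 2 videos, 64 frames, 17 joints, (x, y) coordinates
    video_score, clip_scores, online_scores = model(dummy)
    print(video_score.shape, clip_scores.shape, online_scores.shape)
```

In this sketch, the pseudo-label branch would be trained with video-level labels only, and its clip scores then supervise the online branch, mirroring the weakly supervised training scheme summarized in the abstract.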