Motor behaviour analysis is essential to biomedical research and clinical diagnostics, as it provides a non-invasive strategy for identifying motor impairments and the changes caused by interventions. State-of-the-art instrumented movement analysis is time- and cost-intensive, since it requires the placement of physical or virtual markers. Besides the effort of annotating keypoints for training or fine-tuning a detector, users must know the behaviour of interest beforehand to define meaningful keypoints. We introduce unsupervised behaviour analysis and magnification (uBAM), an automatic deep learning algorithm that analyses behaviour by discovering and magnifying deviations. A central aspect is the unsupervised learning of posture and behaviour representations, which enables an objective comparison of movements. Besides discovering and quantifying deviations in behaviour, we also propose a generative model that visually magnifies subtle behaviour differences directly in a video, without a detour via keypoints or annotations. Essential for this magnification of deviations, even across different individuals, is a disentangling of appearance and behaviour. Evaluations on rodents and human patients with neurological diseases demonstrate the wide applicability of our approach. Moreover, combining optogenetic stimulation with our unsupervised behaviour analysis shows its suitability as a non-invasive diagnostic tool that correlates function with brain plasticity.
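The magnification idea described above can be illustrated conceptually: encode a frame into separate appearance and behaviour representations, extrapolate the behaviour code away from a healthy reference, and decode. The sketch below is a deliberately minimal stand-in, assuming toy linear encoders and a linear decoder with made-up dimensions; the actual uBAM model uses learned deep networks, and the function names here (`encode_appearance`, `encode_behaviour`, `magnify`) are illustrative, not the paper's API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the learned components (assumption: the real model
# uses deep convolutional encoders/decoders, not random linear maps).
D_IN, D_APP, D_BEH = 64, 8, 8
W_app = rng.normal(size=(D_APP, D_IN))          # appearance encoder
W_beh = rng.normal(size=(D_BEH, D_IN))          # behaviour encoder
W_dec = rng.normal(size=(D_IN, D_APP + D_BEH))  # decoder

def encode_appearance(x):
    return W_app @ x

def encode_behaviour(x):
    return W_beh @ x

def decode(a, b):
    return W_dec @ np.concatenate([a, b])

def magnify(x_impaired, x_healthy, lam=2.0):
    """Amplify the behaviour deviation of x_impaired from a healthy
    reference by a factor lam, while keeping the impaired subject's
    appearance. The appearance/behaviour split is what makes this
    transfer meaningful even across different individuals."""
    a = encode_appearance(x_impaired)
    b_ref = encode_behaviour(x_healthy)
    b_dev = encode_behaviour(x_impaired) - b_ref
    return decode(a, b_ref + lam * b_dev)

frame_impaired = rng.normal(size=D_IN)  # placeholder for a video frame
frame_healthy = rng.normal(size=D_IN)
magnified = magnify(frame_impaired, frame_healthy, lam=2.0)
print(magnified.shape)  # (64,)
```

With `lam=1.0` the behaviour code is unchanged and the decoder simply reconstructs the impaired subject's own posture; `lam>1` pushes the decoded frame further along the deviation direction, which is the sense in which subtle differences are "magnified".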