Detailed analysis of seizure semiology, the symptoms and signs which occur during a seizure, is critical for management of epilepsy patients. Inter-rater reliability using qualitative visual analysis is often poor for semiological features. Therefore, automatic and quantitative analysis of video-recorded seizures is needed for objective assessment. We present GESTURES, a novel architecture combining convolutional neural networks (CNNs) and recurrent neural networks (RNNs) to learn deep representations of arbitrarily long videos of epileptic seizures. We use a spatiotemporal CNN (STCNN) pre-trained on large human action recognition (HAR) datasets to extract features from short snippets (approx. 0.5 s) sampled from seizure videos. We then train an RNN to learn seizure-level representations from the sequence of features. We curated a dataset of seizure videos from 68 patients and evaluated GESTURES on its ability to classify seizures into focal onset seizures (FOSs) (N = 106) vs. focal to bilateral tonic-clonic seizures (TCSs) (N = 77), obtaining an accuracy of 98.9% using bidirectional long short-term memory (BLSTM) units. We demonstrate that an STCNN trained on a HAR dataset can be used in combination with an RNN to accurately represent arbitrarily long videos of seizures. GESTURES can provide accurate seizure classification by modeling sequences of semiologies.
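The pipeline described above (sample short snippets, extract per-snippet features with a pretrained STCNN, aggregate the feature sequence with a bidirectional RNN, classify FOS vs. TCS) can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: the `stcnn_features` function is a stand-in for a pretrained spatiotemporal CNN (here just a random projection of mean-pooled pixels), the recurrent step is a vanilla RNN rather than a trained BLSTM, and all weights are random and untrained. Function names and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_snippets(video, snippet_len=8, n_snippets=12):
    """Sample n_snippets evenly spaced short clips from a (T, H, W, C) video,
    mimicking the ~0.5 s snippet sampling in the paper."""
    T = video.shape[0]
    starts = np.linspace(0, T - snippet_len, n_snippets).astype(int)
    return np.stack([video[s:s + snippet_len] for s in starts])  # (N, L, H, W, C)

def stcnn_features(snippets, dim=16):
    """Toy stand-in for a pretrained STCNN feature extractor:
    mean-pool each snippet over time and space, then randomly project."""
    pooled = snippets.mean(axis=(1, 2, 3))                # (N, C)
    W = rng.standard_normal((pooled.shape[1], dim)) * 0.1  # random, untrained
    return np.tanh(pooled @ W)                            # (N, dim)

def rnn_pass(feats, hidden=8, reverse=False):
    """Minimal vanilla RNN over the snippet-feature sequence; returns the
    final hidden state. A trained (B)LSTM would replace this in practice."""
    d = feats.shape[1]
    Wx = rng.standard_normal((d, hidden)) * 0.1
    Wh = rng.standard_normal((hidden, hidden)) * 0.1
    h = np.zeros(hidden)
    seq = feats[::-1] if reverse else feats
    for x in seq:
        h = np.tanh(x @ Wx + h @ Wh)
    return h

def classify_seizure(video):
    """Forward + backward passes give a bidirectional seizure-level
    representation; a linear head yields P(TCS)."""
    feats = stcnn_features(sample_snippets(video))
    h = np.concatenate([rnn_pass(feats), rnn_pass(feats, reverse=True)])
    w = rng.standard_normal(h.shape[0]) * 0.1
    return 1.0 / (1.0 + np.exp(-(h @ w)))

# Dummy 200-frame video clip standing in for a recorded seizure.
video = rng.random((200, 32, 32, 3))
p = classify_seizure(video)  # probability in (0, 1); meaningless until trained
```

Because the RNN consumes a variable-length sequence of fixed-size snippet features, the same seizure-level representation applies to arbitrarily long videos, which is the key point of the architecture.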