Obtaining per-beat information is a key task in the analysis of the electrocardiogram (ECG), as many downstream diagnostic tasks depend on ECG-based measurements. Those measurements, however, are costly to produce, especially in recordings whose characteristics change over long periods of time. Moreover, existing annotated databases for ECG delineation are small, insufficient both in size and in the range of pathological conditions they represent. This article has two main contributions. First, a pseudo-synthetic data generation algorithm was developed, based on probabilistically composing ECG traces from "pools" of fundamental segments cropped from the original databases, together with a set of rules for arranging them into coherent synthetic traces. The generation of pathological conditions is controlled by imposing expert knowledge on the generated trace, which increases the input variability available for training the model. Second, two novel segmentation-based loss functions have been developed, which attempt to enforce the prediction of an exact number of independent structures and to produce tighter segmentation boundaries by focusing on a reduced number of samples. The best-performing model obtained an $F_1$-score of 99.38\% and delineation errors of $2.19 \pm 17.73$ ms and $4.45 \pm 18.32$ ms for all wave fiducials (onsets and offsets, respectively), averaged across the P, QRS and T waves over three distinct freely available databases. These results were obtained despite the heterogeneous characteristics of the tested databases in terms of lead configurations (Holter, 12-lead), sampling frequencies ($250$, $500$ and $2{,}000$ Hz) and represented pathophysiologies (e.g., different types of arrhythmias, sinus rhythm with structural heart disease), hinting at the model's generalization capabilities, while outperforming current state-of-the-art delineation approaches.
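To make the first contribution more concrete, the following is a minimal, illustrative sketch (Python/NumPy) of the segment-pool composition idea described above: per-type pools of cropped fundamental segments are sampled once per beat and concatenated into a synthetic trace while the ground-truth mask is generated alongside. The pool structure, segment ordering, label values and the placement of the expert-knowledge rules are assumptions for illustration and do not reproduce the authors' implementation.

    import numpy as np

    # Assumed segment types per beat and per-sample labels (baseline segments -> 0).
    SEGMENT_ORDER = ["P", "PQ", "QRS", "ST", "T", "TP"]
    LABELS = {"P": 1, "QRS": 2, "T": 3}

    def compose_trace(pools, n_beats, rng=None):
        """Draw one segment of each type per beat from the pools and concatenate
        them into a synthetic signal with its corresponding segmentation mask."""
        rng = np.random.default_rng() if rng is None else rng
        signal, mask = [], []
        for _ in range(n_beats):
            for seg_type in SEGMENT_ORDER:
                # Expert-knowledge rules (e.g., omitting the P wave to mimic
                # atrial fibrillation) would be applied here in a full generator.
                pool = pools[seg_type]
                segment = pool[rng.integers(len(pool))]
                signal.append(segment)
                mask.append(np.full(len(segment), LABELS.get(seg_type, 0)))
        return np.concatenate(signal), np.concatenate(mask)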