Standard deep learning-based classification approaches require collecting all samples from all classes in advance and are trained offline. This paradigm may be impractical in real-world clinical applications, where new classes are introduced incrementally as new data are acquired. Class incremental learning is a strategy that allows learning from such data. A major challenge, however, is catastrophic forgetting, i.e., the degradation of performance on previous classes when a trained model is adapted to new data. Prior methodologies alleviate this challenge by saving a portion of the training data, which requires perpetual storage of such data and may introduce privacy issues. Here, we propose a novel data-free class incremental learning framework that first synthesizes data from the model trained on previous classes to generate a \ours. Subsequently, it updates the model by combining the synthesized data with the new class data. Furthermore, we incorporate a cosine-normalized cross-entropy loss to mitigate the adverse effects of class imbalance, a margin loss to increase the separation between previous and new classes, and an intra-domain contrastive loss to generalize a model trained on synthesized data to real data. We compare the proposed framework with state-of-the-art class incremental learning methods and demonstrate improved classification accuracy on 11,062 echocardiography cine series of patients.
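For concreteness, a minimal sketch of the cosine-normalized cross-entropy and margin losses, following the common rebalancing formulation for incremental learning; the paper's exact definitions may differ. Here $f(x)$ denotes the feature extractor output for sample $x$ with label $y$, $w_c$ the classifier weight vector of class $c$, $\eta$ a learnable scale, $m$ the margin, and $\mathcal{N}$ the set of new-class indices; these symbols are our assumptions and do not appear in the abstract.

% Cosine normalization: logits are scaled cosine similarities between the
% L2-normalized feature and the L2-normalized per-class weight vectors.
\begin{equation}
  \mathcal{L}_{\mathrm{ce}}(x, y)
  = -\log \frac{\exp\!\big(\eta\,\langle \bar{f}(x), \bar{w}_y \rangle\big)}
               {\sum_{c} \exp\!\big(\eta\,\langle \bar{f}(x), \bar{w}_c \rangle\big)},
  \qquad \bar{v} = \frac{v}{\lVert v \rVert_2}.
\end{equation}

% Margin loss: for a sample of a previous class, push its similarity to the
% ground-truth class weight above its similarity to new-class weights by m.
\begin{equation}
  \mathcal{L}_{\mathrm{mr}}(x, y)
  = \sum_{k \in \mathcal{N}}
    \max\!\big(0,\; m - \langle \bar{f}(x), \bar{w}_y \rangle
                     + \langle \bar{f}(x), \bar{w}_k \rangle\big).
\end{equation}

Normalizing both features and weights removes the bias toward new classes that arises when their weight norms grow larger than those of previous classes during incremental updates.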