Due to model aging, Deep Neural Networks (DNNs) need updates to adapt to new data distributions. The common practice leverages incremental learning (IL), e.g., Class-based Incremental Learning (CIL), which updates output labels, to update the model with new data and a limited amount of old data. This avoids heavyweight retraining from scratch using conventional methods and saves storage space by reducing the amount of old data that must be stored. However, it also harms model fairness. In this paper, we show that CIL suffers from both dataset bias and algorithm bias, and that existing solutions only partially address these problems. We propose a novel framework, CILIATE, that fixes both dataset and algorithm bias in CIL. It features a novel differential-analysis-guided dataset and training refinement process that identifies unique and important samples overlooked by existing CIL methods and forces the model to learn from them. Through this process, CILIATE improves the fairness of CIL by 17.03%, 22.46%, and 31.79% compared to the state-of-the-art methods iCaRL, BiC, and WA, respectively, based on our evaluation on three popular datasets and widely used ResNet models.
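To illustrate the idea of differential analysis between model versions, here is a minimal sketch; the function name, the inputs, and the "newly misclassified" criterion are illustrative assumptions, not the paper's exact algorithm:

```python
# Hypothetical sketch (assumption, not CILIATE's actual algorithm): compare
# predictions of the model before and after an incremental update, flagging
# samples the updated model newly gets wrong. Such "forgotten" samples are
# candidates to emphasize during training refinement.

def differential_analysis(old_preds, new_preds, labels):
    """Return indices of samples the old model classified correctly
    but the updated model misclassifies (candidate forgotten samples)."""
    forgotten = []
    for i, (old_p, new_p, y) in enumerate(zip(old_preds, new_preds, labels)):
        if old_p == y and new_p != y:
            forgotten.append(i)
    return forgotten

# Toy example: only sample 1 flips from correct to wrong after the update.
old = [0, 1, 2, 1]
new = [0, 2, 2, 1]
y   = [0, 1, 2, 0]
print(differential_analysis(old, new, y))  # → [1]
```

Samples flagged this way could then be replayed or upweighted in the next training round, which matches the abstract's description of enforcing the model to learn from overlooked samples.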