Incremental learning enables artificial agents to learn from sequential data. While important progress has been made by exploiting deep neural networks, incremental learning remains very challenging. This is particularly true when no memory of past data is allowed and catastrophic forgetting has a strong negative effect. We tackle class-incremental learning without memory by adapting prediction bias correction, a method which makes the predictions of past and new classes more comparable. Bias correction was originally proposed for the setting in which a memory is allowed, and it cannot be used directly without one, since it requires samples of past classes. We introduce a two-step learning process which enables the transfer of bias correction parameters between reference and target datasets. Bias correction is first optimized offline on reference datasets which have an associated validation memory. The resulting correction parameters are then transferred to target datasets, for which no memory is available. Our second contribution is a finer modeling of bias correction, which learns correction parameters per incremental state instead of the usual past vs. new class dichotomy. The proposed dataset knowledge transfer is applicable to any incremental method which works without memory. We test its effectiveness by applying it to four existing methods. Evaluation on four target datasets and different configurations shows consistent improvement, with practically no computational or memory overhead.
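To make the per-state bias correction concrete, the following is a minimal sketch assuming an affine rescaling of the raw logits at inference time, with one slope/offset pair per incremental state; the function and variable names (`correct_logits`, `class_to_state`, `alphas`, `betas`) are illustrative assumptions, not the authors' actual API.

```python
# Hypothetical sketch of per-state prediction bias correction: each class's
# logit is rescaled with the affine parameters of the state in which the
# class was learned. The classic past-vs-new scheme is the special case
# with only two parameter pairs.
import numpy as np

def correct_logits(logits, class_to_state, alphas, betas):
    """Rescale raw logits with per-state affine parameters.

    logits:         (n_samples, n_classes) raw scores of the incremental model.
    class_to_state: (n_classes,) index of the state in which each class was learned.
    alphas, betas:  (n_states,) correction parameters, one pair per state.
    """
    a = alphas[class_to_state]  # per-class slope, broadcast over samples
    b = betas[class_to_state]   # per-class offset
    return logits * a + b

# Toy usage: 6 classes learned over 3 states (2 classes per state).
logits = np.random.randn(4, 6)
class_to_state = np.array([0, 0, 1, 1, 2, 2])
alphas = np.array([1.3, 1.1, 1.0])  # older states receive a stronger boost
betas = np.array([0.2, 0.1, 0.0])
predictions = correct_logits(logits, class_to_state, alphas, betas).argmax(axis=1)
```

In the transfer setting described above, the `alphas` and `betas` values would be optimized offline on reference datasets with a validation memory, then reused as-is on target datasets for which no memory is available.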