Incrementally training deep neural networks to recognize new classes is a challenging problem. Most existing class-incremental learning methods store data or use generative replay, both of which have drawbacks, while 'rehearsal-free' alternatives such as parameter regularization or bias-correction methods do not consistently achieve high performance. Here, we put forward a new strategy for class-incremental learning: generative classification. Rather than directly learning the conditional distribution p(y|x), our proposal is to learn the joint distribution p(x,y), factorized as p(x|y)p(y), and to perform classification using Bayes' rule. As a proof-of-principle, here we implement this strategy by training a variational autoencoder for each class to be learned and by using importance sampling to estimate the likelihoods p(x|y). This simple approach performs very well on a diverse set of continual learning benchmarks, outperforming generative replay and other existing baselines that do not store data.
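The generative-classification recipe above can be sketched in a few lines. The snippet below is a minimal toy illustration, not the paper's implementation: it replaces each class's trained VAE with a hand-specified Gaussian latent-variable model (prior z ~ N(mu_c, I), decoder x|z ~ N(z, sigma² I)) and a fixed stand-in proposal for the encoder q(z|x), so that the two key steps — importance-sampling estimation of log p(x|y) and classification via Bayes' rule — can be shown self-contained. All names (`importance_log_likelihood`, `classify`, `class_means`) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_logpdf(x, mean, var):
    # Log density of an isotropic Gaussian N(mean, var * I), summed over dims.
    return -0.5 * np.sum((x - mean) ** 2 / var + np.log(2 * np.pi * var), axis=-1)

def importance_log_likelihood(x, mu_c, sigma2=0.25, n_samples=512):
    """Importance-sampling estimate of log p(x|c) for a toy generative model:
    prior z ~ N(mu_c, I), decoder x|z ~ N(z, sigma2 * I).
    The proposal q(z|x) = N((x + mu_c)/2, 0.5 * I) stands in for a learned
    amortized encoder; in the paper both encoder and decoder are trained VAEs."""
    d = x.shape[0]
    q_mean, q_var = (x + mu_c) / 2.0, 0.5
    z = q_mean + np.sqrt(q_var) * rng.standard_normal((n_samples, d))
    log_w = (gauss_logpdf(x, z, sigma2)         # log p(x|z)
             + gauss_logpdf(z, mu_c, 1.0)       # + log p(z)
             - gauss_logpdf(z, q_mean, q_var))  # - log q(z|x)
    # log( (1/S) * sum_s w_s ), computed stably in log space.
    return np.logaddexp.reduce(log_w) - np.log(n_samples)

def classify(x, class_means, log_prior=None):
    # Bayes' rule: argmax_c  log p(x|c) + log p(c).
    scores = np.array([importance_log_likelihood(x, mu) for mu in class_means])
    if log_prior is not None:
        scores = scores + log_prior
    return int(np.argmax(scores))

# Two toy "classes" with well-separated means; a sample near class 1's mean
# should be assigned to class 1.
class_means = [np.zeros(4), np.full(4, 3.0)]
x = class_means[1] + 0.1 * rng.standard_normal(4)
print(classify(x, class_means))
```

Because each class's generative model is trained and evaluated independently, adding a new class only requires fitting one new model — no stored examples of old classes and no replay — which is what makes this strategy rehearsal-free.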