Models for learning probability distributions, such as generative models and density estimators, behave quite differently from models for learning functions. One example is the memorization phenomenon, namely the eventual convergence to the empirical distribution, which occurs in generative adversarial networks (GANs). For this reason, the issue of generalization is subtler than it is for supervised learning. For the bias potential model, we show that dimension-independent generalization accuracy is achievable if early stopping is adopted, even though in the long term the model either memorizes the samples or diverges.
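To make the setting concrete, the sketch below trains a toy bias potential model p_V(x) ∝ exp(-V(x)) p_0(x) by gradient descent on the cross-entropy loss, with early stopping against a held-out split. The random-feature parameterization of V, the Gaussian base distribution p_0, the Monte Carlo estimate of the normalizer, and all hyperparameters are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

# Toy bias potential model: p_V(x) ∝ exp(-V(x)) * p0(x), with p0 = N(0, I).
# V is a random-feature model V(x) = a · phi(x); only the weights a are trained.
# Cross-entropy loss: L(a) = mean_i V(x_i) + log Z(a), where the normalizer
# Z(a) = E_{p0}[exp(-V)] is estimated by Monte Carlo over base samples.

rng = np.random.default_rng(0)
d, m = 2, 200                          # input dimension, number of features
W = rng.normal(size=(m, d))            # fixed random feature directions
b = rng.uniform(0, 2 * np.pi, size=m)  # fixed random phases

def phi(x):                            # feature map, shape (n, m)
    return np.cos(x @ W.T + b) / np.sqrt(m)

def loss(a, x, z):                     # cross-entropy on data x, base samples z
    return (phi(x) @ a).mean() + np.log(np.exp(-(phi(z) @ a)).mean())

def grad(a, x, z):                     # gradient: E_data[phi] - E_{p_V}[phi]
    pz = phi(z)
    w = np.exp(-(pz @ a))
    w /= w.sum()                       # self-normalized importance weights
    return phi(x).mean(axis=0) - w @ pz

# Samples from an unknown target (here a shifted Gaussian, for illustration).
x = rng.normal(loc=1.0, size=(400, d))
x_tr, x_val = x[:300], x[300:]         # held-out split for early stopping
z = rng.normal(size=(5000, d))         # base samples for the Z estimate

a = np.zeros(m)
best_loss, best_a = np.inf, a.copy()
for t in range(2000):
    a -= 0.5 * grad(a, x_tr, z)        # gradient descent step
    v = loss(a, x_val, z)
    if v < best_loss:                  # early stopping: keep the parameters
        best_loss, best_a = v, a.copy()  # with the best held-out loss

print(f"best validation cross-entropy: {best_loss:.4f}")
```

In line with the abstract's claim, it is the validation-tracked parameters best_a, rather than the final iterate a, that one would expect to generalize: running the loop indefinitely drives the model toward the empirical distribution of the training samples.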