Recent advances in deep generative models have led to impressive results in a variety of application domains. Motivated by the possibility that deep learning models might memorize part of the input data, there have been increased efforts to understand how memorization arises. In this work, we extend a recently proposed measure of memorization for supervised learning (Feldman, 2019) to the unsupervised density estimation problem and adapt it to be more computationally efficient. Next, we present a study that demonstrates how memorization can occur in probabilistic deep generative models such as variational autoencoders. This reveals that the form of memorization to which these models are susceptible differs fundamentally from mode collapse and overfitting. Furthermore, we show that the proposed memorization score measures a phenomenon that is not captured by commonly-used nearest neighbor tests. Finally, we discuss several strategies that can be used to limit memorization in practice. Our work thus provides a framework for understanding problematic memorization in probabilistic generative models.