The impressive generalization performance of modern neural networks is attributed in part to their ability to implicitly memorize complex training patterns. Inspired by this, we explore a novel mechanism to improve model generalization via explicit memorization. Specifically, we propose the residual-memorization (ResMem) algorithm, a new method that augments an existing prediction model (e.g., a neural network) by fitting the model's residuals with a $k$-nearest-neighbor regressor. The final prediction is then the sum of the base model's output and the fitted residual regressor's output. By construction, ResMem can explicitly memorize the training labels. Empirically, we show that ResMem consistently improves the test-set generalization of the original prediction model across standard vision and natural language processing benchmarks. Theoretically, we formulate a stylized linear regression problem and rigorously show that ResMem yields a more favorable test risk than the base predictor.
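To make the construction concrete, below is a minimal sketch of the idea in a regression setting using scikit-learn's `KNeighborsRegressor`. The wrapper class, its parameters, and the plain (unweighted) nearest-neighbor choice are illustrative assumptions for exposition, not the paper's exact implementation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor


class ResMem:
    """Illustrative sketch: augment a base predictor with a kNN fit on its residuals."""

    def __init__(self, base_model, k=1):
        self.base_model = base_model                   # pre-trained predictor f(x)
        self.knn = KNeighborsRegressor(n_neighbors=k)  # regressor for residuals r(x)

    def fit(self, X_train, y_train):
        # Fit the kNN regressor to the base model's training residuals y - f(x).
        residuals = y_train - self.base_model.predict(X_train)
        self.knn.fit(X_train, residuals)
        return self

    def predict(self, X):
        # Final prediction is the sum f(x) + r(x).
        return self.base_model.predict(X) + self.knn.predict(X)


# Usage on synthetic data (shapes are arbitrary):
X, y = np.random.randn(200, 10), np.random.randn(200)
base = LinearRegression().fit(X, y)
resmem = ResMem(base, k=1).fit(X, y)
assert np.allclose(resmem.predict(X), y)  # training labels reproduced exactly
```

With `k=1`, the nearest neighbor of a training point is the point itself, so the kNN returns that point's own residual and the combined predictor recovers the training label exactly, matching the explicit-memorization property stated above.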