We expect the generalization error to improve with more samples from a similar task, and to deteriorate with more samples from an out-of-distribution (OOD) task. In this work, we show a counterintuitive phenomenon: the generalization error on a target task can be a non-monotonic function of the number of OOD samples. As the number of OOD samples increases, the generalization error on the target task first improves and then deteriorates beyond a threshold. In other words, there is value in training on small amounts of OOD data. We use Fisher's Linear Discriminant on synthetic datasets and deep networks on computer vision benchmarks such as MNIST, CIFAR-10, CINIC-10, PACS and DomainNet to demonstrate and analyze this phenomenon. In the idealized setting where we know which samples are OOD, we show that these non-monotonic trends can be exploited using an appropriately weighted objective of the target and OOD empirical risk. While the practical utility of this weighted objective is limited, it does suggest that if we can detect OOD samples, then there may be ways to benefit from them. When we do not know which samples are OOD, we show that go-to strategies such as data augmentation, hyperparameter optimization, and pre-training are not enough to ensure that the target generalization error does not deteriorate with the number of OOD samples in the dataset.
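To make the weighted objective concrete, here is a minimal sketch under assumed notation (the sample counts $n$ and $m$, the mixing weight $\alpha$, and the loss $\ell$ are our labels for illustration, not necessarily the paper's): given $n$ target samples $(x_i, y_i)$ and $m$ OOD samples $(\tilde{x}_j, \tilde{y}_j)$, one minimizes

\[
\hat{R}_\alpha(h) \;=\; \frac{\alpha}{n} \sum_{i=1}^{n} \ell\big(h(x_i), y_i\big) \;+\; \frac{1-\alpha}{m} \sum_{j=1}^{m} \ell\big(h(\tilde{x}_j), \tilde{y}_j\big), \qquad \alpha \in [0, 1],
\]

where $\alpha = 1$ ignores the OOD samples entirely and smaller values of $\alpha$ let the OOD data act as a regularizer; the non-monotonic trends described above suggest that an intermediate $\alpha$ can outperform both extremes.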