A machine learning model that generalizes well should obtain low errors on unseen test examples. Thus, a model learned optimally on training data should achieve better generalization performance on testing tasks. However, learning such a model is not possible in standard machine learning frameworks because the distribution of the test data is unknown. To tackle this challenge, we propose a novel robust meta-learning method that is robust to unknown, image-based testing tasks whose distributions shift from those of the training tasks. Our robust meta-learning method can provide robust optimal models even when data from each distribution are scarce. In experiments, we demonstrate that our algorithm not only achieves better generalization performance but is also robust to different unknown testing tasks.