Recently, there has been a surge of interest in enabling machine learning systems to generalize well to Out-of-Distribution (OOD) data. Most efforts are devoted to advancing optimization objectives that regularize models to capture the underlying invariance; however, the optimization process of these OOD objectives often involves compromises: i) Many OOD objectives have to be relaxed into penalty terms on top of Empirical Risk Minimization (ERM) for ease of optimization, while the relaxed forms can weaken the robustness of the original objective; ii) The penalty terms also require careful tuning of the penalty weights due to the intrinsic conflicts between the ERM and OOD objectives. Consequently, these compromises can easily lead to suboptimal performance on either the ERM or the OOD objective. To address these issues, we introduce a multi-objective optimization (MOO) perspective to understand the OOD optimization process, and propose a new optimization scheme called PAreto Invariant Risk minimization (PAIR). PAIR improves the robustness of OOD objectives by optimizing them cooperatively with other OOD objectives, thereby bridging the gaps caused by the relaxations. PAIR then approaches a Pareto optimal solution that properly trades off the ERM and OOD objectives. Extensive experiments on the challenging WILDS benchmarks show that PAIR alleviates these compromises and yields top OOD performance.
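To make the MOO perspective concrete, the sketch below (an illustration, not the paper's implementation; all names and the toy loss values are assumptions) shows the penalty-relaxed form of an OOD objective and a Pareto-dominance check, the criterion under which one trade-off between the ERM and OOD losses is strictly preferable to another:

```python
# Illustrative sketch of the two ideas in the abstract (hypothetical names,
# toy numbers): a penalty-relaxed OOD objective and Pareto dominance.

def penalized_loss(erm_loss, ood_penalty, weight):
    """Common relaxation: ERM loss plus a weighted OOD penalty term.

    The penalty weight must be tuned carefully, since the ERM and OOD
    objectives intrinsically conflict.
    """
    return erm_loss + weight * ood_penalty


def dominates(a, b):
    """True if loss vector `a` Pareto-dominates `b`:
    no worse on every objective, strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))


# Each solution is a vector of (ERM loss, OOD loss). A solution that is
# no worse on ERM and strictly better on the OOD objective dominates one
# that sacrificed OOD performance to the relaxation.
sol_a = (0.30, 0.10)
sol_b = (0.30, 0.25)
print(dominates(sol_a, sol_b))  # True: sol_a is the better trade-off
print(dominates(sol_b, sol_a))  # False
```

A Pareto optimal solution is simply one that no other feasible solution dominates; scanning candidates with `dominates` and keeping the undominated ones yields the Pareto front among which the trade-off is chosen.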