Meta learning with multiple objectives can be formulated as a Multi-Objective Bi-Level optimization Problem (MOBLP), where the upper-level subproblem optimizes several possibly conflicting objectives for the meta learner. However, existing studies either apply an inefficient evolutionary algorithm or linearly combine the multiple objectives into a single-objective problem, which requires tuning the combination weights. In this paper, we propose a unified gradient-based Multi-Objective Meta Learning (MOML) framework and devise the first gradient-based optimization algorithm to solve the MOBLP, which alternately solves the lower-level and upper-level subproblems via the gradient descent method and a gradient-based multi-objective optimization method, respectively. Theoretically, we prove the convergence properties of the proposed gradient-based optimization algorithm. Empirically, we show the effectiveness of the proposed MOML framework on several meta learning problems, including few-shot learning, neural architecture search, domain adaptation, and multi-task learning.
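The alternating scheme described above can be illustrated with a minimal sketch. This is not the paper's algorithm but a toy analogue under stated assumptions: the lower-level loss is a simple quadratic `(w - alpha)^2` solved by gradient descent, the two upper-level objectives are hypothetical quadratics in the lower-level solution, and the upper-level update uses an MGDA-style min-norm combination of the two objective gradients (which for two objectives has a closed form). All function names and constants here are illustrative, not from the paper.

```python
import numpy as np

def lower_step(w, alpha, lr=0.1):
    # Lower-level subproblem: one gradient-descent step on
    # L(w, alpha) = (w - alpha)^2, whose minimizer is w = alpha.
    return w - lr * 2.0 * (w - alpha)

def min_norm_two(g1, g2):
    # MGDA-style min-norm combination for two objective gradients:
    # find gamma in [0, 1] minimizing ||gamma*g1 + (1-gamma)*g2||^2.
    diff = g1 - g2
    denom = float(np.dot(diff, diff))
    if denom == 0.0:
        gamma = 0.5
    else:
        gamma = float(np.clip(np.dot(g2 - g1, g2) / denom, 0.0, 1.0))
    return gamma * g1 + (1.0 - gamma) * g2

alpha = np.array([0.0])   # upper-level (meta) parameter
w = np.array([5.0])       # lower-level parameter

for _outer in range(50):
    # Approximately solve the lower-level subproblem by a few GD steps.
    for _inner in range(20):
        w = lower_step(w, alpha)
    # Toy upper-level objectives f1 = (w - 1)^2 and f2 = (w + 1)^2.
    # Since w tracks alpha after the inner loop, the hypergradients
    # w.r.t. alpha are approximated by the gradients w.r.t. w.
    g1 = 2.0 * (w - 1.0)
    g2 = 2.0 * (w + 1.0)
    d = min_norm_two(g1, g2)
    # Upper-level step along the common descent direction.
    alpha = alpha - 0.05 * d
```

For these two objectives any point with `w` in [-1, 1] is Pareto stationary (the min-norm combined gradient vanishes there), so the iteration settles inside that interval rather than at either objective's individual minimum.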