Current state-of-the-art multi-objective optimization solvers compute the gradients of all $m$ objective functions at every iteration and, after $k$ iterations, produce a measure of proximity to critical conditions that is upper-bounded by $O(1/\sqrt{k})$ when the objective functions have $L$-Lipschitz continuous gradients; i.e., they require $O(m/\epsilon^2)$ gradient and function evaluations to drive this measure below a target $\epsilon$. We reduce this cost to $O(1/\epsilon^2)$ with a method that requires only a constant number of gradient and function evaluations per iteration, thus obtaining, for the first time, a multi-objective descent-type method whose query complexity is unaffected by increasing values of $m$. To this end, we identify a new multi-objective descent direction, which we name the \emph{central descent direction}, and propose an incremental approach. We establish robustness properties of the central descent direction, derive measures of proximity to critical conditions, and show that the incremental strategy for finding solutions to the multi-objective problem attains convergence properties unattained by previous methods. To the best of our knowledge, this is the first method to achieve this without additional a priori information on the structure of the problem, as required by scalarization techniques, and without prior knowledge of the regularity of the objective functions beyond Lipschitz continuity of the gradients.
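For context, a minimal sketch of how such a proximity measure typically arises in descent-type methods, shown only for illustration via the classical Fliege--Svaiter-type steepest-descent subproblem (the central descent direction introduced in this work is a different construction and is not reproduced here):
\[
  \theta(x) \;=\; \min_{d \in \mathbb{R}^{n}} \, \max_{1 \le i \le m} \, \nabla f_i(x)^{\top} d \;+\; \tfrac{1}{2}\|d\|^{2},
\]
where $x$ is Pareto critical if and only if $\theta(x) = 0$. Under $L$-Lipschitz continuous gradients, such methods typically guarantee that a criticality residual derived from $\theta$ decays at a rate of $O(1/\sqrt{k})$; hence $k = O(1/\epsilon^{2})$ iterations are needed to reach a tolerance $\epsilon$, and, at $m$ gradient evaluations per iteration, this amounts to the $O(m/\epsilon^{2})$ query cost stated above.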