Multi-objective gradient methods are becoming the standard for solving multi-objective problems. Among other applications, they show promising results in developing multi-objective recommender systems with both correlated and conflicting objectives. Classic multi-gradient descent usually relies on combining the raw gradients and does not compute the first and second moments of the gradients. This leads to brittle behavior and misses important areas of the solution space. In this work, we create a multi-objective, model-agnostic Adamize method that leverages the benefits the Adam optimizer shows in single-objective problems. It corrects and stabilizes the gradient of each objective before computing a common gradient descent vector that optimizes all objectives simultaneously. We evaluate the benefits of multi-objective Adamize on two multi-objective recommender systems and for three different objective combinations, either correlated or conflicting. We report significant improvements, measured with three different Pareto front metrics: hypervolume, coverage, and spacing. Finally, we show that the Adamized Pareto front strictly dominates the previous one on multiple objective pairs.
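A minimal sketch of the per-objective "Adamize" idea described above, using plain NumPy. The function names (`adamize_step`, `combine_gradients`) and the equal-weight combination of the corrected gradients are illustrative assumptions, not the paper's actual implementation; in practice the combination step would be a multi-gradient method that finds a common descent direction for all objectives.

```python
# Sketch, under stated assumptions: apply Adam-style first/second moment
# correction to each objective's gradient, then combine into one update.
import numpy as np

def adamize_step(grad, m, v, t, beta1=0.9, beta2=0.999, eps=1e-8):
    """Adam-style correction of a single objective's gradient."""
    m = beta1 * m + (1.0 - beta1) * grad          # first moment estimate
    v = beta2 * v + (1.0 - beta2) * grad ** 2     # second moment estimate
    m_hat = m / (1.0 - beta1 ** t)                # bias correction
    v_hat = v / (1.0 - beta2 ** t)
    corrected = m_hat / (np.sqrt(v_hat) + eps)    # stabilized gradient
    return corrected, m, v

def combine_gradients(corrected_grads):
    """Placeholder combination: equal weights over corrected gradients.
    A multi-gradient descent method would instead compute weights that
    yield a direction improving all objectives simultaneously."""
    return np.mean(corrected_grads, axis=0)

# Toy usage: two synthetic objectives over a 3-dimensional parameter vector.
rng = np.random.default_rng(0)
params = rng.normal(size=3)
num_objectives, lr = 2, 1e-2
m = [np.zeros_like(params) for _ in range(num_objectives)]
v = [np.zeros_like(params) for _ in range(num_objectives)]

for t in range(1, 101):
    # Stand-in gradients; in a recommender system these would come from
    # each objective's loss (e.g., relevance, novelty, fairness).
    grads = [2.0 * (params - 1.0), 2.0 * (params + 1.0)]
    corrected = []
    for k in range(num_objectives):
        c, m[k], v[k] = adamize_step(grads[k], m[k], v[k], t)
        corrected.append(c)
    params -= lr * combine_gradients(corrected)
```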