Pareto Front Learning (PFL) was recently introduced as an effective approach for learning a mapping from a given trade-off vector to a solution on the Pareto front, thereby solving the multi-objective optimization (MOO) problem. Due to the inherent trade-off between conflicting objectives, PFL offers a flexible approach in scenarios where decision makers cannot specify a preference for one Pareto solution over another and must switch between solutions depending on the situation. However, existing PFL methods ignore the relationships among solutions during the optimization process, which limits the quality of the obtained front. To overcome this issue, we propose a novel PFL framework, named PHN-HVI, which employs a hypernetwork to generate multiple solutions from a set of diverse trade-off preferences and improves the quality of the Pareto front by maximizing the hypervolume indicator defined by these solutions. Experimental results on several MOO machine learning tasks show that the proposed framework significantly outperforms the baselines in producing the trade-off Pareto front.
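To make the optimization target concrete, the hypervolume indicator measures the region of objective space dominated by a set of solutions with respect to a reference point; a larger value indicates a front that is both closer to the true Pareto front and more diverse. Below is a minimal illustrative sketch of the two-objective (minimization) case, not the paper's implementation; the function name and the sweep-based computation are our own choices for exposition, and the points are assumed to be mutually nondominated and to dominate the reference point.

```python
def hypervolume_2d(points, ref):
    """Hypervolume of a 2-objective minimization front w.r.t. a
    reference point: the area of the union of rectangles
    [f1, ref[0]] x [f2, ref[1]] dominated by each point.

    Assumes every point is nondominated and dominates `ref`
    (i.e., f1 < ref[0] and f2 < ref[1] for all points).
    """
    # Sort ascending by the first objective; on a nondominated
    # front the second objective is then strictly decreasing.
    pts = sorted(points)
    hv = 0.0
    prev_f1 = ref[0]
    # Sweep from the largest f1 down: each point contributes a
    # disjoint rectangle between its f1 and the previous f1.
    for f1, f2 in reversed(pts):
        hv += (prev_f1 - f1) * (ref[1] - f2)
        prev_f1 = f1
    return hv

# Example: three nondominated points against reference (4, 4).
front = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]
print(hypervolume_2d(front, (4.0, 4.0)))  # area of the dominated region
```

In PHN-HVI this quantity (suitably generalized to more objectives and made differentiable) serves as the training signal tying the hypernetwork's multiple generated solutions together, which is how the framework accounts for relationships among solutions rather than optimizing each one in isolation.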