Multi-objective optimization (MOO) aims to find a set of optimal configurations for a given set of objectives. A recent line of work applies MOO methods to the typical Machine Learning (ML) setting, which becomes multi-objective when a model has to optimize more than one objective, for instance in fair machine learning. These works also use Multi-Task Learning (MTL) problems to benchmark MOO algorithms, treating each task as an independent objective. In this work we show that MTL problems do not exhibit the characteristics of MOO problems. In particular, MTL losses do not compete given a sufficiently expressive single model. As a consequence, a single shared model can perform just as well as optimizing each objective with an independent model, rendering MOO inapplicable. We provide evidence with extensive experiments on the widely used Multi-Fashion-MNIST datasets. Our results call for new benchmarks to evaluate MOO algorithms for ML. Our code is available at: https://github.com/ruchtem/moo-mtl.
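To make the setting concrete, the sketch below shows the kind of single shared model the abstract refers to: one trunk with one classification head per task, trained by simply summing the per-task losses, with no MOO machinery. This is only an illustrative assumption, not the authors' implementation; the architecture is a hypothetical LeNet-style trunk, and the batch is synthetic stand-in data rather than the Multi-Fashion-MNIST images used in the paper.

```python
# Minimal sketch (illustrative, not the authors' code): a single shared model
# with two task heads, optimized with the plain sum of the task losses.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedTwoHeadNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Shared trunk (hypothetical LeNet-like choice for illustration).
        self.trunk = nn.Sequential(
            nn.Conv2d(1, 10, 5), nn.MaxPool2d(2), nn.ReLU(),
            nn.Conv2d(10, 20, 5), nn.MaxPool2d(2), nn.ReLU(),
            nn.Flatten(), nn.Linear(320, 50), nn.ReLU(),
        )
        # One classification head per task.
        self.head_task1 = nn.Linear(50, num_classes)
        self.head_task2 = nn.Linear(50, num_classes)

    def forward(self, x):
        z = self.trunk(x)
        return self.head_task1(z), self.head_task2(z)

model = SharedTwoHeadNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Synthetic stand-in batch: 28x28 grayscale images, one label per task.
x = torch.randn(32, 1, 28, 28)
y1 = torch.randint(0, 10, (32,))
y2 = torch.randint(0, 10, (32,))

logits1, logits2 = model(x)
# Unweighted sum of the task losses -- no multi-objective machinery.
loss = F.cross_entropy(logits1, y1) + F.cross_entropy(logits2, y2)

optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"summed task loss: {loss.item():.4f}")
```

If the shared trunk is expressive enough, the abstract's claim is that this summed-loss model matches a set of independent per-task models, which is why treating the tasks as competing objectives adds nothing.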