Multi-objective optimization (MOO) is a prevalent challenge in deep learning; however, no scalable MOO solution exists for truly deep neural networks. Prior work either demands optimizing a new network for every point on the Pareto front, or adds substantial overhead to the number of trainable parameters by using hyper-networks conditioned on modifiable preferences. In this paper, we propose to condition the network directly on these preferences by appending them to the feature space. Furthermore, we ensure a well-spread Pareto front by penalizing solutions whose angle to the preference vector is large. In a series of experiments, we demonstrate that our Pareto fronts achieve state-of-the-art quality despite being computed significantly faster. Furthermore, we showcase the scalability of our method by approximating the full Pareto front on the CelebA dataset with an EfficientNet network at a tiny training-time overhead of 7% compared to simple single-objective optimization. We make our code publicly available at https://github.com/ruchtem/cosmos.
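To make the idea concrete, the following is a minimal sketch of a preference-conditioned scalarized loss with a cosine-alignment penalty, in the spirit of the approach described above. The function name, the penalty weight, and the exact form of the penalty are illustrative assumptions, not the paper's verbatim formulation.

```python
import numpy as np

def preference_conditioned_loss(losses, preference, penalty_weight=1.0):
    """Illustrative sketch: scalarize a vector of task losses with a
    sampled preference vector, and reward solutions whose loss vector
    keeps a small angle to that preference (high cosine similarity).

    Names and the penalty form are assumptions for illustration.
    """
    losses = np.asarray(losses, dtype=float)
    preference = np.asarray(preference, dtype=float)

    # Linear scalarization weighted by the sampled preference.
    scalarized = float(np.dot(preference, losses))

    # Cosine similarity between the loss vector and the preference
    # vector; a value near 1 means a small angle, which we reward
    # by subtracting the term (so the total loss decreases).
    cos_sim = float(np.dot(losses, preference)) / (
        np.linalg.norm(losses) * np.linalg.norm(preference) + 1e-12
    )
    return scalarized - penalty_weight * cos_sim
```

With preference `[0.5, 0.5]`, the loss vectors `[1, 1]` and `[2, 0]` scalarize identically, but the aligned vector `[1, 1]` receives the lower total loss because its angle to the preference is zero; this is what spreads solutions along the Pareto front.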