Although traditional optimization methods focus on finding a single optimal solution, most objective functions in modern machine learning problems, especially those in deep learning, have multiple or even infinitely many optima. It is therefore useful to consider the problem of finding a set of diverse points within the optimum set of an objective function. In this work, we frame this as a bi-level optimization problem: maximizing a diversity score inside the optimum set of the main loss function. We solve it with a simple population gradient descent framework that iteratively updates the points to maximize the diversity score in a fashion that does not hurt the optimization of the main loss. We demonstrate that our method can efficiently generate diverse solutions on a variety of applications, including text-to-image generation, text-to-mesh generation, molecular conformation generation, and ensemble neural network training.
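To make the framework concrete, below is a minimal, hypothetical sketch of such a population gradient descent on a toy problem whose optimum set is the unit circle. All names, the diversity score (sum of pairwise squared distances), and the projection rule (removing the component of the diversity gradient along the main-loss gradient so the diversity update does not hurt loss minimization) are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def loss(x):
    # Toy main loss whose optimum set is the unit circle: (|x|^2 - 1)^2.
    r2 = np.dot(x, x)
    return (r2 - 1.0) ** 2

def loss_grad(x):
    r2 = np.dot(x, x)
    return 4.0 * (r2 - 1.0) * x

def diversity_grad(points, i):
    # Gradient w.r.t. points[i] of the sum of pairwise squared distances
    # (an illustrative diversity score; the paper's score may differ).
    g = np.zeros_like(points[i])
    for j, p in enumerate(points):
        if j != i:
            g += 2.0 * (points[i] - p)
    return g

def step(points, lr=0.05, div_weight=0.1):
    new_points = []
    for i, x in enumerate(points):
        g_loss = loss_grad(x)
        g_div = diversity_grad(points, i)
        # Project the diversity direction orthogonal to the loss gradient,
        # so the diversity update does not oppose loss minimization.
        n = np.dot(g_loss, g_loss)
        if n > 1e-12:
            g_div = g_div - (np.dot(g_div, g_loss) / n) * g_loss
        new_points.append(x - lr * g_loss + lr * div_weight * g_div)
    return new_points

# Start all points near (1, 0); they should stay on the circle
# (low loss) while spreading apart (high diversity).
rng = np.random.default_rng(0)
points = [rng.normal(size=2) * 0.1 + np.array([1.0, 0.0]) for _ in range(4)]
for _ in range(500):
    points = step(points)
```

Under this sketch, the loss gradient keeps each point on the optimum set while the projected diversity gradient moves the population tangentially along it, so the points end up spread around the circle rather than collapsed at one solution.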