We introduce a novel deep learning algorithm for computing convex conjugates of differentiable convex functions, a fundamental operation in convex analysis with applications in fields such as optimization, control theory, physics, and economics. While traditional numerical methods suffer from the curse of dimensionality and become computationally intractable in high dimensions, more recent neural network-based approaches scale better but have mostly been studied with the aim of solving optimal transport problems and require solving complicated optimization or max-min problems. Using an implicit Fenchel formulation of convex conjugation, our approach enables an efficient gradient-based framework for minimizing approximation errors and, as a byproduct, also provides a posteriori error estimates for the approximation quality. Numerical experiments demonstrate our method's ability to deliver accurate results across different high-dimensional examples. Moreover, by employing symbolic regression with Kolmogorov--Arnold networks, it is able to recover the exact convex conjugates of specific convex functions.
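To make the implicit Fenchel formulation concrete, the sketch below illustrates the general idea (not the paper's exact algorithm): for a differentiable convex $f$, the Fenchel--Young inequality $f(x) + f^*(y) \geq \langle x, y\rangle$ holds with equality precisely when $y = \nabla f(x)$, so a parametric model of $f^*$ can be trained by driving the nonnegative residual $f(x) + g_\theta(\nabla f(x)) - \langle x, \nabla f(x)\rangle$ to zero. The 1-D quadratic $f$, the quadratic model $g_\theta$, and all hyperparameters here are illustrative stand-ins for the neural networks used in the paper.

```python
import numpy as np

# Illustrative sketch (assumed setup, not the paper's exact method):
# f(x) = a*x^2/2 in one dimension has the known conjugate f*(y) = y^2/(2a).
# We fit g_theta(y) = theta*y^2 to f* by minimizing the squared
# Fenchel-Young residual r(x) = f(x) + g_theta(f'(x)) - x*f'(x),
# which is nonnegative and vanishes only when g_theta = f* on the data.

a = 2.0
f = lambda x: 0.5 * a * x**2
grad_f = lambda x: a * x

theta = 0.0                       # model parameter; exact value is 1/(2a)
xs = np.linspace(-1.0, 1.0, 101)  # sample points in the primal domain
lr = 0.1

for _ in range(500):
    y = grad_f(xs)                            # dual points y = f'(x)
    r = f(xs) + theta * y**2 - xs * y         # Fenchel-Young residual (>= 0)
    grad_theta = np.mean(2.0 * r * y**2)      # gradient of the mean squared residual
    theta -= lr * grad_theta

print(theta)  # converges to 1/(2a) = 0.25
```

Note that the residual itself serves as the a posteriori error estimate mentioned in the abstract: it is zero exactly where the model coincides with the true conjugate, so its magnitude directly bounds the approximation quality at each sample.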