Bilevel Optimization Programming is used to model complex and conflicting interactions between agents, for example in Robust AI or Privacy-preserving AI. Integrating bilevel mathematical programming within deep learning is thus an essential objective for the Machine Learning community. Previously proposed approaches consider only single-level programming. In this paper, we extend existing single-level optimization programming approaches and propose Differentiating through Bilevel Optimization Programming (BiGrad) for end-to-end learning of models that use Bilevel Programming as a layer. BiGrad has wide applicability and can be used in modern machine learning frameworks. BiGrad is applicable to both continuous and combinatorial Bilevel optimization problems. For the combinatorial case, we describe a class of gradient estimators that reduces the computational complexity requirements; for the continuous case, the gradient computation takes advantage of the push-back approach (i.e., vector-Jacobian product) for an efficient implementation. Experiments show that BiGrad successfully extends existing single-level approaches to Bilevel Programming.
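To illustrate the push-back (vector-Jacobian product) computation mentioned above, the following is a minimal, hypothetical sketch of differentiating through a continuous lower-level problem via the implicit function theorem, written with JAX's custom_vjp. The objectives, function names, and inner solver here are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: end-to-end gradient through y*(x) = argmin_y g(x, y),
# assuming g is smooth and strongly convex in y, using a custom VJP (push-back).
import jax
import jax.numpy as jnp

def g(x, y):
    # Example lower-level objective (illustrative only).
    return jnp.sum((y - x) ** 2) + 0.1 * jnp.sum(y ** 2)

@jax.custom_vjp
def lower_level_solution(x):
    # Placeholder inner solver: a few gradient steps on the lower-level problem.
    y = jnp.zeros_like(x)
    for _ in range(100):
        y = y - 0.1 * jax.grad(g, argnums=1)(x, y)
    return y

def lower_fwd(x):
    y = lower_level_solution(x)
    return y, (x, y)

def lower_bwd(res, v):
    # Implicit differentiation: dy*/dx = -(d^2g/dy^2)^{-1} d^2g/dydx,
    # so the vector-Jacobian product is  v -> -(d^2g/dydx)^T (d^2g/dy^2)^{-1} v.
    x, y = res
    grad_y = lambda x_, y_: jax.grad(g, argnums=1)(x_, y_)
    H_yy = jax.jacobian(grad_y, argnums=1)(x, y)   # d^2g/dy^2
    H_yx = jax.jacobian(grad_y, argnums=0)(x, y)   # d^2g/dydx
    w = jnp.linalg.solve(H_yy, v)
    return (-H_yx.T @ w,)

lower_level_solution.defvjp(lower_fwd, lower_bwd)

def upper_level(x):
    # Example upper-level objective f(x, y*(x)).
    y_star = lower_level_solution(x)
    return jnp.sum(y_star ** 2) + jnp.sum(x ** 2)

x0 = jnp.array([1.0, -2.0, 0.5])
print(jax.grad(upper_level)(x0))  # gradient flows through the bilevel layer
```

The design choice shown here is that only vector-Jacobian products with the lower-level optimality conditions are needed, so the upper-level gradient can be obtained without materializing the full Jacobian of the lower-level solution.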