We develop new parameter- and scale-free algorithms for solving convex-concave saddle-point problems. Our results are based on a new simple regret minimizer, the Conic Blackwell Algorithm$^+$ (CBA$^+$), which attains $O(1/\sqrt{T})$ average regret. Intuitively, our approach generalizes ideas from the Counterfactual Regret Minimization$^+$ (CFR$^+$) algorithm, which has very strong practical performance for solving sequential games on simplices, to other decision sets of interest. We show how to implement CBA$^+$ for the simplex, $\ell_{p}$ norm balls, and ellipsoidal confidence regions in the simplex, and we present numerical experiments for solving matrix games and distributionally robust optimization problems. Our empirical results show that CBA$^+$ is a simple algorithm that outperforms state-of-the-art methods on synthetic and real data instances, without the need for any choice of step sizes or other algorithmic parameters.