Several learning problems involve solving min-max problems, e.g., empirical distributionally robust learning or learning with non-standard aggregated losses. More specifically, these problems are convex-linear problems where the minimization is carried out over the model parameters $w\in\mathcal{W}$ and the maximization over the empirical distribution $p\in\mathcal{K}$ of the training set indices, where $\mathcal{K}$ is the simplex or a subset of it. To design efficient methods, we let an online learning algorithm play against a (combinatorial) bandit algorithm. We argue that the efficiency of such approaches critically depends on the structure of $\mathcal{K}$ and propose two properties of $\mathcal{K}$ that facilitate designing efficient algorithms. We focus on a specific family of sets $\mathcal{S}_{n,k}$ encompassing various learning applications and provide high-probability convergence guarantees to the minimax value.
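As a sketch of the generic objective described above (the notation $\ell_i$ for the per-example loss and $\Delta_n$ for the probability simplex are assumptions, not fixed by the abstract), the convex-linear saddle-point problem can be written as
\[
\min_{w\in\mathcal{W}} \; \max_{p\in\mathcal{K}} \; \sum_{i=1}^{n} p_i\,\ell_i(w),
\qquad \mathcal{K}\subseteq\Delta_n,
\]
where $n$ is the number of training examples. For instance, taking $\mathcal{K}=\Delta_n$ recovers minimization of the worst-case per-example loss $\max_i \ell_i(w)$, while a strict subset $\mathcal{K}\subset\Delta_n$ yields non-standard aggregated losses.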