Two main concepts studied in machine learning theory are the generalization gap (the difference between train and test error) and the excess risk (the difference between test error and the minimum possible error). While information-theoretic tools have been used extensively to study the generalization gap of learning algorithms, the information-theoretic nature of excess risk has not yet been fully investigated. In this paper, some steps are taken toward this goal. We consider the frequentist problem of minimax excess risk as a zero-sum game between the algorithm designer and the world. Then, we argue that it is desirable to modify this game so that the order of play can be swapped. We prove that, under some regularity conditions, if the world and the designer can play randomly, the duality gap is zero and the order of play can be changed. In this case, a Bayesian problem surfaces in the dual representation. This makes it possible to utilize recent information-theoretic results on minimum excess risk in Bayesian learning to provide bounds on the minimax excess risk. We demonstrate the applicability of the results by providing information-theoretic insight into two important classes of problems: classification when the hypothesis space has finite VC-dimension, and regularized least squares.
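The two quantities and the minimax-to-Bayes duality sketched above can be written out explicitly. The notation below is a plausible formalization chosen for illustration (the symbols $\mathcal{A}$, $P$, $\pi$, and the loss $\ell$ are our own conventions, not necessarily the paper's):

$$
\underbrace{\operatorname{gen}(\mathcal{A}, P) = \mathbb{E}\!\left[ L_P(h_S) - \hat{L}_S(h_S) \right]}_{\text{generalization gap}},
\qquad
\underbrace{\mathcal{E}(\mathcal{A}, P) = \mathbb{E}\!\left[ L_P(h_S) \right] - \inf_{h} L_P(h)}_{\text{excess risk}},
$$

where $S \sim P^n$ is the training sample, $h_S = \mathcal{A}(S)$ is the learned hypothesis, $L_P(h) = \mathbb{E}_{Z \sim P}[\ell(h, Z)]$ is the population risk, and $\hat{L}_S(h)$ is the empirical risk. The minimax excess risk game and its dual then take the form

$$
\inf_{\mathcal{A}} \sup_{P} \; \mathcal{E}(\mathcal{A}, P)
\;\;\ge\;\;
\sup_{\pi} \inf_{\mathcal{A}} \; \mathbb{E}_{P \sim \pi}\!\left[ \mathcal{E}(\mathcal{A}, P) \right],
$$

where the supremum on the right ranges over priors $\pi$ on the set of distributions: the inner problem on the right is a Bayesian learning problem. The abstract's claim is that, under regularity conditions and with randomized play, this inequality holds with equality (zero duality gap), so Bayesian minimum-excess-risk bounds transfer to the frequentist minimax quantity.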