The No Free Lunch theorems prove that under a uniform distribution over induction problems (search problems or learning problems), all induction algorithms perform equally. As I discuss in this chapter, the importance of the theorems arises from using them to analyze scenarios involving \emph{non-uniform} distributions, and to compare different algorithms, without any assumption about the distribution over problems at all. In particular, the theorems prove that \emph{anti}-cross-validation (choosing among a set of candidate algorithms based on which has \emph{worst} out-of-sample behavior) performs as well as cross-validation, unless one makes an assumption -- which has never been formalized -- about how the distribution over induction problems, on the one hand, is related to the set of algorithms one is choosing among with (anti-)cross-validation, on the other. In addition, they establish strong caveats concerning the significance of the many results in the literature that establish the strength of a particular algorithm without assuming a particular distribution. They also motivate a ``dictionary'' between supervised learning and blackbox optimization, which allows one to ``translate'' techniques from supervised learning into the domain of blackbox optimization, thereby strengthening blackbox optimization algorithms. In addition to these topics, I also briefly discuss their implications for philosophy of science.
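The cross-validation claim above can be illustrated with a minimal simulation (my sketch, not from the chapter): targets are drawn uniformly over all Boolean functions on a small input space, two toy learners (predict the training-majority class vs. the training-minority class) are compared by leave-one-out cross-validation, and off-training-set error is then measured for the learner cross-validation picks and for the one anti-cross-validation picks. Under the uniform distribution, both selection rules average an off-training-set error of 0.5.

```python
import random

random.seed(0)
N, TRAIN, TRIALS = 8, 5, 20000  # |X|, training-set size, Monte Carlo trials

def majority(labels):
    # Toy learner A: predict the training-majority class everywhere (ties -> 1).
    return int(2 * sum(labels) >= len(labels))

def minority(labels):
    # Toy learner B: predict the training-minority class everywhere.
    return 1 - majority(labels)

def ots_error(pred, f, test_xs):
    # Off-training-set error of a constant prediction against target f.
    return sum(pred != f[x] for x in test_xs) / len(test_xs)

cv_err = anti_err = 0.0
for _ in range(TRIALS):
    f = [random.randint(0, 1) for _ in range(N)]  # uniform random target
    xs = list(range(N))
    random.shuffle(xs)
    tr, te = xs[:TRAIN], xs[TRAIN:]

    def loo(alg):
        # Leave-one-out cross-validation error of alg on the training set.
        return sum(alg([f[x] for x in tr if x != h]) != f[h] for h in tr) / TRAIN

    a, b = loo(majority), loo(minority)
    # Cross-validation picks the lower-CV-error learner; anti-CV the higher.
    best, worst = (majority, minority) if a <= b else (minority, majority)
    labels = [f[x] for x in tr]
    cv_err += ots_error(best(labels), f, te)
    anti_err += ots_error(worst(labels), f, te)

print(cv_err / TRIALS, anti_err / TRIALS)  # both close to 0.5
```

The learners and sizes here are arbitrary choices for illustration; the point is only that, averaged over uniformly drawn targets, choosing the \emph{worst}-validated learner does no worse off the training set than choosing the best-validated one.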