Regularization is a well-established machine learning (ML) technique for striking an optimal bias-variance trade-off, which in turn reduces model complexity and enhances explainability. To this end, regularization hyper-parameters must be tuned so that the ML model fits unseen data as accurately as the data it was trained on. In this article, the authors argue that tuning regularization hyper-parameters and quantifying the costs and risks of false alarms are in reality two sides of the same coin: explainability. An incorrect, or absent, estimate of either quantity undermines the measurability of the economic value of using ML, potentially to the point of making it practically useless.
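As a toy illustration of the tuning described above — using entirely hypothetical data and a one-parameter ridge model, not the authors' method — the sketch below sweeps a regularization strength λ and selects the value that generalizes best to held-out ("unseen") data rather than the value that best fits the training ("seen") data:

```python
# Closed-form 1-D ridge regression: w = Σ x_i y_i / (Σ x_i² + λ).
# Larger λ shrinks the fitted slope w toward zero (more bias, less variance).
def ridge_fit(xs, ys, lam):
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

def mse(xs, ys, w):
    return sum((y - w * x) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Hypothetical noisy samples: "seen" training data and "unseen" held-out data.
train_x, train_y = [1, 2, 3, 4], [2.5, 4.3, 6.6, 8.5]
test_x,  test_y  = [1.5, 2.5, 3.5], [3.1, 5.0, 7.1]

# Sweep the regularization hyper-parameter and keep the λ whose model
# minimizes error on the unseen data, i.e. generalizes best.
best_err, best_lam = min(
    (mse(test_x, test_y, ridge_fit(train_x, train_y, lam)), lam)
    for lam in [0.0, 0.1, 1.0, 10.0]
)
print(f"selected λ = {best_lam}, held-out MSE = {best_err:.4f}")
```

On this data the unregularized fit (λ = 0) overshoots the slope that the held-out samples support, so a nonzero λ wins the sweep — the generalization-driven selection the abstract refers to.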