We have entered a new era of machine learning (ML), in which the most accurate algorithm with superior predictive power may not even be deployable unless it is admissible under regulatory constraints. This has led to great interest in developing fair, transparent, and trustworthy ML methods. The purpose of this article is to introduce a new information-theoretic learning framework (admissible machine learning) and algorithmic risk-management tools (InfoGram, L-features, ALFA-testing) that can guide an analyst in redesigning off-the-shelf ML methods to be regulatory-compliant while maintaining good prediction accuracy. We illustrate our approach using several real-data examples from the financial sector, biomedical research, marketing campaigns, and the criminal justice system.