Predictive systems, in particular machine learning algorithms, can make important, and sometimes legally binding, decisions about our everyday lives. In most cases, however, these systems and decisions are neither regulated nor certified. Given the potential harm that such algorithms can cause, qualities such as fairness, accountability and transparency (FAT) are of paramount importance. To ensure high-quality, fair, transparent and reliable predictive systems, we developed an open source Python package called FAT Forensics. It can inspect important fairness, accountability and transparency aspects of predictive algorithms and automatically and objectively report them back to the engineers and users of such systems. Our toolbox can evaluate all elements of a predictive pipeline: data (and their features), models and predictions. Published under the BSD 3-Clause open source licence, FAT Forensics is open for personal and commercial use.
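To make the kind of inspection described above concrete, below is a minimal, self-contained sketch of one common group-fairness check that toolboxes of this sort automate: the disparate impact ratio, i.e. the ratio of positive-prediction rates between demographic groups. The function name, the metric choice and the synthetic data are illustrative assumptions, not the package's actual API.

```python
import numpy as np


def disparate_impact(predictions, groups, positive=1):
    """Ratio of the lowest to the highest positive-prediction
    rate across groups (1.0 means perfectly equal rates).

    Illustrative helper, not part of any package's API."""
    rates = []
    for g in np.unique(groups):
        mask = groups == g
        rates.append(np.mean(predictions[mask] == positive))
    rates = sorted(rates)
    return rates[0] / rates[-1]


# Synthetic binary predictions for two demographic groups (0 and 1).
preds = np.array([1, 1, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

# Group 0 receives a positive prediction 80% of the time,
# group 1 only 40% of the time.
print(round(disparate_impact(preds, group), 2))  # 0.5
```

A ratio well below 1.0, as here, flags that one group is systematically favoured by the model's predictions; an automated FAT report would surface this to the system's engineers and users.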