The increasing amount of available data, growing computing power, and the constant pursuit of higher performance result in increasingly complex predictive models. Their black-box nature leads to an opaqueness debt phenomenon, which inflicts increased risks of discrimination, lack of reproducibility, and deflated performance due to data drift. To manage these risks, good MLOps practices call for better validation of model performance and fairness, higher explainability, and continuous monitoring. The need for deeper model transparency arises not only from scientific and social domains but also from emerging laws and regulations on artificial intelligence. To facilitate the development of responsible machine learning models, we showcase dalex, a Python package which implements a model-agnostic interface for interactive model exploration. It adopts the design crafted through the development of various tools for responsible machine learning; thus, it aims at unifying the existing solutions. The library's source code and documentation are available under an open license at https://python.drwhy.ai/.
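As a minimal sketch of the unified, model-agnostic workflow mentioned above: the snippet below wraps a scikit-learn classifier in a dalex Explainer and queries it for performance, global importance, a local attribution, and a fairness check. The calls (Explainer, model_performance, model_parts, predict_parts, model_fairness) belong to the package's public API, but the specific dataset, model, and preprocessing are illustrative assumptions, not the paper's experimental setup.

    # Illustrative sketch: assumes dalex and scikit-learn are installed and uses
    # the Titanic dataset bundled with dalex; numeric columns only, for brevity.
    import dalex as dx
    from sklearn.ensemble import RandomForestClassifier

    data = dx.datasets.load_titanic()
    X = data.drop(columns="survived").select_dtypes("number")
    y = data["survived"]

    # any predictive model can sit behind the same Explainer interface
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    explainer = dx.Explainer(model, X, y, label="titanic_rf")

    explainer.model_performance()             # validate predictive performance
    explainer.model_parts()                   # global, permutation-based variable importance
    explainer.predict_parts(X.iloc[[0]])      # local attribution for a single observation
    explainer.model_fairness(                 # fairness check across a protected attribute
        protected=data["gender"], privileged="male"
    )

Each of these methods returns an explanation object that can also be plotted interactively, which is what the package means by interactive model exploration.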