Advances in Artificial Intelligence are creating new opportunities to improve the lives of people around the world, from business to healthcare, from lifestyle to education. For example, some systems profile users based on their demographic and behavioral characteristics to make certain domain-specific predictions. Often, such predictions impact the life of the user directly or indirectly (e.g., loan disbursement, determining insurance coverage, shortlisting applications, etc.). As a result, concerns over such AI-enabled systems are also increasing. To address these concerns, such systems are mandated to be responsible, i.e., transparent, fair, and explainable to developers and end-users. In this paper, we present ComplAI, a unique framework to enable, observe, analyze, and quantify explainability, robustness, performance, fairness, and model behavior under drift, and to provide a single Trust Factor that evaluates different supervised Machine Learning models not just by their ability to make correct predictions but from an overall responsibility perspective. The framework helps users to (a) connect their models and enable explanations, (b) assess and visualize different aspects of the model, such as robustness, drift susceptibility, and fairness, and (c) compare different models (from different model families or obtained through different hyperparameter settings) from an overall perspective, thereby facilitating actionable recourse for model improvement. It is model-agnostic and works with different supervised machine learning scenarios (i.e., Binary Classification, Multi-class Classification, and Regression) and frameworks, and it can be seamlessly integrated with any ML life-cycle framework. Thus, this already-deployed framework aims to unify critical aspects of Responsible AI systems and thereby regulate the development process of such real-world systems.
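The abstract describes reducing several evaluation dimensions (explainability, robustness, performance, fairness, drift behavior) to a single Trust Factor. As a minimal illustrative sketch of one way such an aggregation could work, the snippet below computes a weighted mean over normalized per-dimension scores; the function name, weights, and example scores are hypothetical assumptions for illustration, not the actual ComplAI API.

```python
# Hypothetical sketch: combining per-dimension model scores into one
# scalar "trust factor" via a weighted mean. Not the ComplAI API.
from typing import Dict, Optional

def trust_factor(scores: Dict[str, float],
                 weights: Optional[Dict[str, float]] = None) -> float:
    """Aggregate per-dimension scores in [0, 1] into a single scalar.

    `scores` maps dimension names (e.g., 'performance', 'robustness',
    'fairness', 'explainability', 'drift') to normalized values.
    """
    if weights is None:
        weights = {k: 1.0 for k in scores}  # equal weighting by default
    total = sum(weights[k] for k in scores)
    return sum(scores[k] * weights[k] for k in scores) / total

# Example: compare two candidate models on the same dimensions
# (all numbers below are made up for illustration).
model_a = {"performance": 0.92, "robustness": 0.70,
           "fairness": 0.85, "explainability": 0.60, "drift": 0.75}
model_b = {"performance": 0.88, "robustness": 0.82,
           "fairness": 0.90, "explainability": 0.72, "drift": 0.80}
print(f"Model A trust factor: {trust_factor(model_a):.3f}")
print(f"Model B trust factor: {trust_factor(model_b):.3f}")
```

Under this sketch, a model with slightly lower raw accuracy (model B) can still earn a higher overall score if it is stronger on fairness, robustness, and drift resilience, which is the kind of holistic comparison the abstract attributes to the Trust Factor.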