Machine Learning (ML) and its applications have been transforming our lives, but they also raise concerns about the development of fair, accountable, transparent, and ethical Artificial Intelligence. Because ML models are not yet fully comprehensible, humans still need to be part of algorithmic decision-making processes. In this paper, we consider an ML framework that may accelerate model learning and improve its interpretability by incorporating human experts into the model learning loop. We propose a novel human-in-the-loop ML framework aimed at learning problems in which the cost of data annotation is high and appropriate data for modeling the association between the target tasks and the input features is lacking. With an application to precision dosing, our experimental results show that the approach can learn interpretable rules from data and may lower experts' workload by replacing data annotation with rule representation editing. The approach may also help mitigate algorithmic bias by introducing experts' feedback into the iterative model learning process.
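To make the iterative cycle described above concrete, the sketch below shows one way a human-in-the-loop rule-learning loop could be organized: rules are induced from the currently labeled data, an expert edits the rule representation rather than annotating new samples, and the edited rules feed the next learning round. This is a minimal toy under stated assumptions, not the paper's implementation; all names (`Rule`, `learn_rules`, `expert_edit`, `apply_rules`) are illustrative.

```python
# Hypothetical sketch of a human-in-the-loop rule-learning cycle.
# All names and the threshold-rule representation are illustrative
# assumptions, not the framework's actual API.
from dataclasses import dataclass
from typing import Callable, List
import numpy as np

@dataclass
class Rule:
    feature: int      # index of the input feature the rule tests
    threshold: float  # decision boundary on that feature
    label: int        # predicted class when the rule fires

def learn_rules(X: np.ndarray, y: np.ndarray) -> List[Rule]:
    """Induce one simple threshold rule per feature from labeled data
    (a stand-in for a full interpretable rule learner)."""
    rules = []
    for j in range(X.shape[1]):
        t = float(np.median(X[:, j]))
        fired = X[:, j] > t
        # Majority label among the samples where the rule fires.
        label = int(np.round(y[fired].mean())) if fired.any() else 0
        rules.append(Rule(feature=j, threshold=t, label=label))
    return rules

def apply_rules(rules: List[Rule], X: np.ndarray) -> np.ndarray:
    """Predict by majority vote over the rules; a rule votes for its
    label when it fires and for the opposite label otherwise."""
    votes = np.zeros(len(X))
    for r in rules:
        fired = X[:, r.feature] > r.threshold
        votes += np.where(fired, r.label, 1 - r.label)
    return (votes >= len(rules) / 2).astype(int)

def human_in_the_loop(X, y, expert_edit: Callable[[List[Rule]], List[Rule]],
                      n_rounds: int = 3) -> List[Rule]:
    """Iterate: learn rules, let the expert edit their representation,
    then relabel with the edited rules for the next round."""
    labels = y.copy()
    rules: List[Rule] = []
    for _ in range(n_rounds):
        rules = learn_rules(X, labels)
        rules = expert_edit(rules)      # expert feedback replaces annotation
        labels = apply_rules(rules, X)  # edited rules relabel the data
    return rules

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = (X[:, 0] > 0).astype(int)  # ground truth depends on feature 0 only
    # A toy "expert" who nudges thresholds toward zero (domain knowledge).
    nudge = lambda rules: [Rule(r.feature, 0.9 * r.threshold, r.label)
                           for r in rules]
    final_rules = human_in_the_loop(X, y, expert_edit=nudge)
    acc = (apply_rules(final_rules, X) == y).mean()
    print(f"accuracy after expert-guided rounds: {acc:.2f}")
```

In this sketch the expert's edit is a simple threshold adjustment; in a real deployment such as precision dosing, the edit step would present the learned rules to a clinician for review and revision before the next learning round.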