Artificial intelligence (AI) and its data-centric branch, machine learning (ML), have evolved considerably over the last few decades. However, as AI is deployed in a growing number of real-world use cases, the interpretability of and accessibility to AI systems have become major research areas. The lack of interpretability of ML-based systems is a major hindrance to the widespread adoption of these powerful algorithms, for reasons including ethical and regulatory concerns, and has led to poorer uptake of ML in some areas. The recent past has seen a surge in research on interpretable ML. Designing an ML system generally requires sound domain understanding combined with expert knowledge, and new techniques are emerging to improve ML accessibility through automated model design. This paper reviews work done to improve the interpretability and accessibility of machine learning in the context of global problems, with particular relevance to developing countries. We review work under multiple levels of interpretability, including scientific and mathematical interpretation, statistical interpretation, and partial semantic interpretation. The review covers applications in three areas: food processing, agriculture, and health.