In socio-technical settings, operators are increasingly assisted by decision support systems, which are expected to further improve important properties of socio-technical systems such as self-adaptation and self-optimization. To be accepted by operators and to engage with them efficiently, decision support systems need to be able to explain the reasoning behind specific decisions. In this paper, we propose using Learning Classifier Systems (LCS), a family of rule-based machine learning methods, to facilitate transparent decision making, and highlight techniques to improve this transparency. We then present a template of seven questions for assessing application-specific explainability needs and demonstrate its use in an interview-based case study for a manufacturing scenario. We find that the answers we received yielded useful insights for designing a well-suited LCS model, as well as requirements for having stakeholders actively engage with an intelligent agent.