Given the vast amount of available content, social media platforms increasingly employ machine learning (ML) systems to curate news. This paper examines how well different explanations help expert users understand why certain news stories are recommended to them. The expert users were journalists, who are trained to judge the relevance of news. Surprisingly, none of the explanations were perceived as helpful. Our investigation provides a first indication of a gap between what is available to explain ML-based curation systems and what users need to understand such systems. We call this the Explanatory Gap in Machine Learning-based Curation Systems.