We examine the disconnect between scholarship and practice in applying machine learning to trust and safety problems, using misinformation detection as a case study. We systematize the literature on automated misinformation detection across a corpus of 270 well-cited papers in the field. We then examine subsets of these papers for data and code availability, design missteps, reproducibility, and generalizability. Our corpus spans published work in security, natural language processing, and computational social science. Across these disparate disciplines, we identify common errors in dataset and method design. Detection tasks are often meaningfully distinct from the challenges that online services actually face. Datasets and model evaluations are often unrepresentative of real-world contexts, and evaluation is frequently not independent of model training. Data and code availability is poor. We demonstrate the limitations of current detection methods in a series of three replication studies. Based on the results of these analyses and our literature survey, we offer recommendations for evaluating applications of machine learning to trust and safety problems more broadly. Our aim is for future work to avoid the pitfalls we identify.