Software developers often submit questions to technical Q&A sites like Stack Overflow (SO) to resolve code-level problems. In practice, they often include example code snippets with their questions to explain the programming issues. Existing research suggests that users attempt to reproduce the reported issues using the given code snippets when answering questions. Unfortunately, such code snippets cannot always reproduce the issues due to several unmet challenges, which prevents the questions from receiving appropriate and prompt solutions. One previous study investigated reproducibility challenges and produced a catalog. However, how practitioners perceive this catalog of challenges is unknown. Practitioners' perspectives are essential to validating these challenges and estimating their severity. This study first surveyed 53 practitioners to understand their perspectives on reproducibility challenges. We attempt to (a) see whether they agree with these challenges, (b) determine the impact of each challenge on answering questions, and (c) identify the need for tools to promote reproducibility. Survey results show that (a) about 90% of the participants agree with the challenges, (b) "missing an important part of code" hurts reproducibility the most severely, and (c) participants strongly recommend introducing automated tool support to promote reproducibility. Second, we extract \emph{nine} code-based features (e.g., LOC, compilability) and build five Machine Learning (ML) models to predict issue reproducibility. Early detection might help users improve code snippets and their reproducibility. Our models achieve 84.5% precision, 83.0% recall, 82.8% F1-score, and 82.8% overall accuracy, which are highly promising. Third, we systematically interpret the ML model and explain how code snippets with reproducible issues differ from those with irreproducible issues.
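To make the prediction step concrete, the following is a minimal sketch (not the authors' implementation) of training a classifier on code-based features such as LOC and a compilability flag to predict issue reproducibility; the feature set, sample data, and choice of RandomForestClassifier are illustrative assumptions only.

```python
# Minimal sketch (assumption: not the paper's actual pipeline) of predicting
# issue reproducibility from code-based features such as LOC and compilability.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score

# Hypothetical feature matrix: each row represents one question's code snippet,
# with illustrative code-based features [LOC, compilable (0/1)].
X = np.array([
    [12, 1],
    [80, 0],
    [5, 1],
    [150, 0],
    [30, 1],
    [60, 0],
])
# Hypothetical labels: 1 = issue reproducible, 0 = irreproducible.
y = np.array([1, 0, 1, 0, 1, 0])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=42, stratify=y
)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

# Report the same metrics the abstract discusses (precision, recall, F1, accuracy).
print("precision:", precision_score(y_test, pred, zero_division=0))
print("recall:   ", recall_score(y_test, pred, zero_division=0))
print("f1:       ", f1_score(y_test, pred, zero_division=0))
print("accuracy: ", accuracy_score(y_test, pred))
```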