Machine learning (ML)-based cyber-physical systems (CPSs) have been extensively developed to improve the print quality of additive manufacturing (AM). However, the reproducibility of these systems, as presented in published research, has not been thoroughly investigated due to a lack of formal evaluation methods. Reproducibility, a critical component of trustworthy artificial intelligence, is achieved when an independent team can replicate the findings or artifacts of a study using a different experimental setup and achieve comparable performance. In many publications, critical information needed for reproduction is missing, resulting in systems that fail to replicate the reported performance. This paper proposes a reproducibility investigation pipeline and a reproducibility checklist for ML-based process monitoring and quality prediction systems in AM. The pipeline guides researchers through the key steps required to reproduce a study, while the checklist systematically captures reproducibility-relevant information from a publication. We validated the proposed approach through two case studies: reproducing a fused filament fabrication warping detection system and a laser powder bed fusion melt pool area prediction model. Both case studies confirmed that the pipeline and checklist successfully identified missing information, improved reproducibility, and enhanced the performance of the reproduced systems. Based on the proposed checklist, a reproducibility survey was conducted to assess the current state of reproducibility in this research domain. By addressing this gap, the proposed methods aim to enhance trustworthiness and rigor in ML-based AM research, with potential applicability to other ML-based CPSs.