Automatic defect detection for 3D printing processes, which shares many characteristics with change detection problems, is a vital step in quality control for 3D printed products. However, the current state of practice faces several critical challenges. First, existing methods for computer-vision-based process monitoring typically work well only under specific camera viewpoints and lighting conditions, requiring expensive pre-processing, alignment, and camera setups. Second, many defect detection techniques are specific to pre-defined defect patterns and/or print schematics. In this work, we approach the automatic defect detection problem differently, using a novel Semi-Siamese deep learning model that directly compares a reference schematic of the desired print with a camera image of the achieved print. The model then solves an image segmentation problem, identifying the locations of defects with respect to the reference frame. Unlike most change detection problems, our model is specially developed to handle images from different domains and is robust to perturbations in the imaging setup, such as camera angle and illumination. Defect localization predictions take 2.75 seconds per layer on a standard MacBook Pro, comparable to the typical tens of seconds or less needed to print a single layer on an inkjet-based 3D printer, while achieving an F1-score of more than 0.9.
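The core idea of the Semi-Siamese design — a separate encoder for each input domain (schematic vs. camera image) feeding a shared comparison head that emits a per-pixel defect mask — can be sketched as follows. This is a toy numpy illustration under assumed shapes and random weights, not the paper's actual architecture; in practice each branch would be a trained CNN.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, w):
    # Tiny per-pixel feature extractor: a 1x1 "convolution" (linear map
    # over channels) followed by ReLU, standing in for a real CNN branch.
    return np.maximum(x @ w, 0.0)

# Hypothetical shapes: 8x8 layer images; the schematic has 1 channel,
# the camera photo has 3 (hence the two inputs come from different domains).
H, W = 8, 8
schematic = rng.random((H, W, 1))   # reference schematic of the desired print
photo = rng.random((H, W, 3))       # camera image of the achieved print

# Semi-Siamese: each domain gets its OWN encoder weights, unlike a classic
# Siamese network whose two branches share weights.
w_schematic = rng.standard_normal((1, 4))
w_photo = rng.standard_normal((3, 4))

f_s = encoder(schematic, w_schematic)  # (H, W, 4) features from schematic
f_p = encoder(photo, w_photo)          # (H, W, 4) features from photo

# Shared comparison head: per-pixel feature distance, thresholded into a
# binary defect mask — the segmentation output, in the schematic's frame.
dist = np.linalg.norm(f_s - f_p, axis=-1)
defect_mask = dist > dist.mean()

print(defect_mask.shape)  # (8, 8), one defect/no-defect decision per pixel
```

With random weights the mask is of course meaningless; the point is only the data flow: unshared encoders project the two domains into a common feature space, where a shared head localizes differences.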