The application of computer vision and machine learning methods in the field of additive manufacturing (AM) for semantic segmentation of the structural elements of 3-D printed products will improve real-time failure analysis systems and can potentially reduce the number of defects by enabling in situ corrections. This work demonstrates the possibilities of using physics-based rendering to generate labeled image datasets, as well as image-to-image translation capabilities, to improve the accuracy of real image segmentation for AM systems. Multi-class semantic segmentation experiments were carried out using a U-Net model and a cycle generative adversarial network. The test results demonstrated the capacity to detect structural elements of 3-D printed parts, namely the top layer, infill, shell, and support. A basis for further segmentation system enhancement through image-to-image style transfer and domain adaptation technologies was also developed. The results indicate that using style transfer as a precursor to domain adaptation can significantly improve real 3-D printing image segmentation in situations where a model trained on synthetic data is the only tool available. The mean intersection over union (mIoU) scores for synthetic test datasets were 94.90% for the entire 3-D printed part, 73.33% for the top layer, 78.93% for the infill, 55.31% for the shell, and 69.45% for supports.
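For readers unfamiliar with the reported metric, per-class intersection over union and its mean (mIoU) can be sketched as follows. This is a minimal NumPy illustration, not the paper's evaluation code; the class index mapping for the structural elements is a hypothetical assumption for the example, and classes absent from both prediction and ground truth are skipped rather than counted.

```python
import numpy as np

# Hypothetical label mapping for the structural elements named in the
# abstract (the paper's actual class indices are not specified here).
CLASSES = {0: "background", 1: "top layer", 2: "infill", 3: "shell", 4: "support"}

def per_class_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> np.ndarray:
    """IoU for each class; NaN where a class appears in neither mask."""
    ious = np.full(num_classes, np.nan)
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union > 0:
            ious[c] = np.logical_and(p, t).sum() / union
    return ious

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """Mean IoU over the classes present in prediction or ground truth."""
    return float(np.nanmean(per_class_iou(pred, target, num_classes)))

# Toy 4x4 label maps standing in for full segmentation masks.
target = np.array([[1, 1, 2, 2],
                   [1, 1, 2, 2],
                   [3, 3, 4, 4],
                   [3, 3, 4, 4]])
pred = target.copy()
pred[0, 0] = 2  # one pixel of "top layer" mislabeled as "infill"
```

With this toy prediction, the top-layer IoU drops to 3/4 and the infill IoU to 4/5, while shell and support remain perfect, so mIoU averages those four present classes.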