A new machine-learning-based approach for automatically generating 3D digital geometries of woven composite textiles is proposed to overcome the limitations of existing analytical descriptions and segmentation methods. In this approach, panoptic segmentation is leveraged to produce instance-segmented semantic masks from X-ray computed tomography (CT) images. This effort represents the first deep-learning-based automated process for segmenting unique yarn instances in a woven composite textile. Furthermore, it improves on existing methods by providing instance-level segmentation on low-contrast CT datasets. Frame-to-frame instance tracking is accomplished via an intersection-over-union (IoU) approach adopted from video panoptic segmentation for assembling a 3D geometric model. A corrective recognition algorithm is developed to improve the recognition quality (RQ). The panoptic quality (PQ) metric is adopted to provide a new universal evaluation metric for reconstructed woven composite textiles. It is found that the panoptic segmentation network generalizes well to new CT images that are similar to the training set but does not extrapolate well to CT images of differing geometry, texture, and contrast. The utility of this approach is demonstrated by capturing yarn flow directions, contact regions between individual yarns, and the spatially varying cross-sectional areas of the yarns.
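The IoU-based frame-to-frame tracking described above can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes binary NumPy masks per CT slice, uses a simple greedy matching strategy, and all function names and the 0.5 threshold are illustrative.

```python
import numpy as np

def mask_iou(a, b):
    """Intersection-over-union of two boolean instance masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def match_instances(prev_masks, curr_masks, iou_thresh=0.5):
    """Greedily link yarn instances between consecutive CT slices.

    Returns a dict mapping each current-slice instance index to the
    previous-slice instance index it continues, when IoU exceeds the
    threshold; unmatched instances start (or end) a yarn track.
    """
    matches = {}
    used = set()
    for j, cm in enumerate(curr_masks):
        best_i, best_iou = None, iou_thresh
        for i, pm in enumerate(prev_masks):
            if i in used:
                continue
            iou = mask_iou(pm, cm)
            if iou > best_iou:
                best_i, best_iou = i, iou
        if best_i is not None:
            matches[j] = best_i
            used.add(best_i)
    return matches
```

Chaining these per-slice matches over the full CT stack yields the 3D yarn instances from which the geometric model is assembled; a production version might use optimal (Hungarian) assignment rather than greedy matching.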
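The PQ metric adopted for evaluation has a standard definition (Kirillov et al.): PQ = SQ × RQ, where segmentation quality SQ is the mean IoU of matched segments and recognition quality RQ is an F1-style count over true/false positives and false negatives. A minimal sketch follows, assuming binary NumPy masks; names are illustrative and this is the generic metric, not the paper's corrective algorithm.

```python
import numpy as np

def _iou(a, b):
    """IoU of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def panoptic_quality(pred_masks, gt_masks):
    """PQ = (sum of matched IoUs) / (|TP| + |FP|/2 + |FN|/2).

    A predicted segment matches a ground-truth segment iff their
    IoU exceeds 0.5; above that threshold the match is provably unique.
    """
    tp_ious = []
    matched_gt = set()
    for p in pred_masks:
        for gi, g in enumerate(gt_masks):
            if gi in matched_gt:
                continue
            iou = _iou(p, g)
            if iou > 0.5:
                tp_ious.append(iou)
                matched_gt.add(gi)
                break
    tp = len(tp_ious)
    fp = len(pred_masks) - tp
    fn = len(gt_masks) - tp
    denom = tp + 0.5 * fp + 0.5 * fn
    return sum(tp_ious) / denom if denom else 0.0
```

Applied per slice (or per reconstructed yarn volume), this gives a single score that penalizes both poor mask overlap and missed or spurious yarn instances, which is what makes it attractive as a universal metric for reconstructed textiles.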