Modelling the impact of a material's mesostructure on device-level performance typically requires access to 3D image data containing all the relevant information needed to define the geometry of the simulation domain. This image data must include sufficient contrast between phases to distinguish each material, be of high enough resolution to capture the key details, and yet have a large enough field-of-view to be representative of the material in general. It is rarely possible to obtain data with all of these properties from a single imaging technique. In this paper, we present a method for combining information from pairs of distinct but complementary imaging techniques in order to accurately reconstruct the desired multi-phase, high-resolution, representative 3D images. Specifically, we use deep convolutional generative adversarial networks to implement super-resolution, style transfer and dimensionality expansion. To demonstrate the widespread applicability of this tool, two pairs of datasets are used to validate the quality of the volumes generated by fusing the information from paired imaging techniques. Three key mesostructural metrics are calculated in each case to show the accuracy of this method. Having established confidence in the accuracy of our method, we then demonstrate its power by applying it to a real data pair from a lithium-ion battery electrode, where the required 3D high-resolution image data is not available anywhere in the literature. We believe this approach is superior to previously reported statistical material reconstruction methods, both in terms of its fidelity and its ease of use. Furthermore, much of the data required to train this algorithm already exists in the literature, waiting to be combined. As such, our open-access code could precipitate a step change by generating the hard-to-obtain, high-quality image volumes necessary to simulate behaviour at the mesoscale.
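The dimensionality-expansion idea described above (a 3D generator trained against 2D high-resolution data) can be sketched as follows. This is a minimal, illustrative PyTorch example, not the paper's published architecture: all layer counts, channel widths, and class names are assumptions chosen for brevity.

```python
# Hedged sketch: a 3D generator paired with a 2D slice discriminator, in the
# spirit of the super-resolution / dimensionality-expansion GAN the abstract
# describes. Architecture details here are illustrative assumptions only.
import torch
import torch.nn as nn


class Generator3D(nn.Module):
    """Maps a coarse greyscale volume to a 2x super-resolved, 2-phase volume."""

    def __init__(self, in_ch: int = 1, phases: int = 2, width: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, width, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            # transposed conv doubles every spatial dimension (super-resolution)
            nn.ConvTranspose3d(width, width, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(width, phases, kernel_size=3, padding=1),
            nn.Softmax(dim=1),  # per-voxel phase probabilities
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class SliceDiscriminator2D(nn.Module):
    """Scores individual 2D slices, so the high-resolution training data
    itself only needs to be 2D (the dimensionality-expansion trick)."""

    def __init__(self, phases: int = 2, width: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(phases, width, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(width, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # one realism score per slice
        return self.net(x).mean(dim=(1, 2, 3))


if __name__ == "__main__":
    g = Generator3D()
    d = SliceDiscriminator2D()
    coarse = torch.randn(1, 1, 16, 16, 16)   # low-res greyscale input volume
    fake = g(coarse)                         # -> (1, 2, 32, 32, 32)
    slices = fake[0].permute(1, 0, 2, 3)     # depth axis becomes a batch of 2D slices
    scores = d(slices)                       # -> (32,) realism scores
    print(fake.shape, scores.shape)
```

In training, the generator would be updated to maximise the discriminator's scores on slices of its output, while the discriminator is trained to separate those slices from real high-resolution 2D images; the adversarial losses themselves are omitted here for brevity.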