Accurate scanning transmission electron microscopy (STEM) image simulation methods require large computation times that can make them infeasible for simulating many images. Other simulation methods based on linear imaging models, such as the convolution method, are much faster but too inaccurate for practical use. In this paper, we explore deep learning models that attempt to translate a STEM image produced by the convolution method into a prediction of the high-accuracy multislice image. We then compare our results to those of regression methods. We find that a generative adversarial network (GAN) provides the best results, performing at an accuracy level similar to that of previous regression models on the same dataset. Code and data for this project can be found in this GitHub repository: https://github.com/uw-cmg/GAN-STEM-Conv2MultiSlice.
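As context for the speed/accuracy trade-off described above, the convolution method's linear imaging model forms an image as the convolution of an object function with the probe intensity profile. The following is a minimal numpy sketch of that idea (not the authors' code); the Gaussian probe, grid size, and two-atom object are illustrative assumptions:

```python
import numpy as np

def convolution_method_image(obj, probe):
    """Linear imaging model: image = object convolved with probe intensity.

    Computed as an FFT-based circular convolution; `obj` and `probe` are
    2D arrays of equal shape, with the probe centered at the origin.
    """
    return np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(probe)))

# Hypothetical example: a normalized Gaussian probe and a two-atom object.
n = 64
y, x = np.mgrid[:n, :n]
# Squared distance from the origin with periodic wrap-around.
r2 = np.minimum(x, n - x) ** 2 + np.minimum(y, n - y) ** 2
probe = np.exp(-r2 / (2 * 2.0 ** 2))
probe /= probe.sum()

obj = np.zeros((n, n))
obj[20, 20] = 1.0   # a strong scatterer
obj[40, 44] = 0.5   # a weaker one

img = convolution_method_image(obj, probe)
```

Because the probe is normalized, total intensity is preserved, and each point scatterer simply becomes a blurred copy of the probe; this linearity is what makes the method fast but unable to capture the dynamical scattering that multislice simulates.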