Deep generative models are capable of modeling arbitrary probability distributions. Among them, a recent model dubbed sliceGAN proposed a new way of using a generative adversarial network (GAN) to capture the micro-structural characteristics of a two-dimensional (2D) slice and generate three-dimensional (3D) volumes with similar properties. While 3D micrographs are highly valuable for simulating diverse material behavior, they are often much harder to obtain than their 2D counterparts. SliceGAN therefore opens up many interesting directions of research by learning a representative distribution from 2D slices and transferring the learned knowledge to generate arbitrary 3D volumes. However, one limitation of sliceGAN is that it does not support steering in the latent space. To address this, we combine sliceGAN with adaptive instance normalization (AdaIN), endowing the model with the ability to disentangle features and control the synthesis.
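For readers unfamiliar with the style-control mechanism mentioned above, the sketch below illustrates the core AdaIN operation: per-channel statistics of a generator feature map are normalized and then re-scaled with style statistics supplied externally (e.g., predicted from a latent code). This is a minimal illustrative example, not the authors' implementation; the function and variable names are assumptions.

```python
# Minimal AdaIN sketch (illustrative only; names are hypothetical).
import torch

def adain(content_feat: torch.Tensor,
          style_mean: torch.Tensor,
          style_std: torch.Tensor,
          eps: float = 1e-5) -> torch.Tensor:
    """Normalize per-channel statistics of `content_feat` (N, C, ...) and
    re-scale them with externally supplied style statistics of shape (N, C)."""
    dims = tuple(range(2, content_feat.dim()))            # spatial dimensions
    mean = content_feat.mean(dim=dims, keepdim=True)
    std = content_feat.std(dim=dims, keepdim=True) + eps
    normalized = (content_feat - mean) / std
    # Broadcast the style statistics over the spatial dimensions.
    shape = style_mean.shape + (1,) * len(dims)
    return normalized * style_std.view(shape) + style_mean.view(shape)

# Usage: steer a 3D feature map with style parameters derived from a latent code.
feat = torch.randn(2, 64, 4, 4, 4)                        # (N, C, D, H, W)
style_mu = torch.randn(2, 64)
style_sigma = torch.rand(2, 64) + 0.5                     # keep std positive
out = adain(feat, style_mu, style_sigma)
```

In this setup, changing `style_mu` and `style_sigma` (rather than the spatial input) is what allows controlled, disentangled variation of the generated volume.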