Generative models have been widely applied in image recognition to synthesize additional images whose distribution matches that of the real ones. Such models typically introduce a discriminator network tasked with differentiating style-transferred data from data contained in the target dataset. In doing so, however, the discriminator focuses on discrepancies in the intensity distribution and may overlook structural differences between the datasets. In this paper we formulate a new image-to-image translation problem to ensure that the structure of the generated images is similar to that in the target dataset. We propose a simple yet powerful Structure-Unbiased Adversarial (SUA) network that accounts for both intensity and structural differences between the training and test sets when performing image segmentation. It consists of a spatial transformation block followed by an intensity distribution rendering module. The spatial transformation block reduces the structure gap between the two images and also produces an inverse deformation field, which is used to warp the final segmented image back. The intensity distribution rendering module then renders the deformed structure into an image with the target intensity distribution. Experimental results show that the proposed SUA method can transfer both intensity distribution and structural content between multiple datasets.
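To make the two-stage pipeline described above concrete, the sketch below shows one plausible way to wire a spatial transformation block and an intensity rendering module together in PyTorch. It is an illustrative assumption, not the authors' implementation: the network sizes, the single-channel 2D input, and the use of a negated displacement field as an approximate inverse are all placeholders.

```python
# Hedged sketch (assumed, not the paper's code): spatial transformation followed by
# intensity rendering, with the inverse field used to warp a segmentation back.
import torch
import torch.nn as nn
import torch.nn.functional as F

def warp(img, flow):
    """Warp img (N,1,H,W) with a displacement field flow (N,2,H,W) via grid_sample."""
    n, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1).to(img)
    grid = base + flow.permute(0, 2, 3, 1)           # add predicted displacement to identity grid
    return F.grid_sample(img, grid, align_corners=True)

class SpatialTransformBlock(nn.Module):
    """Predicts a dense displacement field from the source image (placeholder architecture)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1), nn.Tanh())   # 2-channel (x, y) field in [-1, 1]
    def forward(self, x):
        flow = 0.1 * self.net(x)                     # keep displacements small
        return flow, -flow                           # crude inverse: negated field (assumption)

class IntensityRenderer(nn.Module):
    """Re-renders the structurally aligned image with target-like intensities."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))
    def forward(self, x):
        return self.net(x)

# Usage: translate a source image, segment it (segmenter not shown), then warp the
# segmentation back to source space with the inverse deformation field.
stb, renderer = SpatialTransformBlock(), IntensityRenderer()
src = torch.randn(1, 1, 64, 64)
flow, inv_flow = stb(src)
translated = renderer(warp(src, flow))
# seg = segmenter(translated); seg_in_source_space = warp(seg, inv_flow)
```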