Image-to-image translation with deep learning neural networks, particularly with Generative Adversarial Networks (GANs), is one of the most powerful methods for simulating astronomical images. However, current work is limited to supervised translation with paired images, and little attention has been paid to reconstructing the noise background that encodes instrumental and observational effects. These limitations may hinder subsequent scientific applications in astrophysics. We therefore aim to develop methods that use unpaired images and preserve noise characteristics during image translation. In this work, we propose a two-way image translation model using GANs that exploits both paired and unpaired images in a semi-supervised manner, and introduce a noise-emulating module that learns and reconstructs noise characterized by high-frequency features. Through experiments on multi-band galaxy images from the Sloan Digital Sky Survey (SDSS) and the Canada-France-Hawaii Telescope Legacy Survey (CFHT), we show that our method recovers global and local image properties effectively and outperforms benchmark image translation models. To the best of our knowledge, this work is the first attempt to apply semi-supervised methods and noise reconstruction techniques in astrophysical studies.