Compared with the original GAN, DCGAN replaces nearly all fully connected layers with convolutional layers, and its discriminator is roughly a mirror image of the generator. The network contains no pooling or explicit upsampling layers; instead, strided convolutions downsample in the discriminator and fractional-strided (transposed) convolutions replace upsampling in the generator, which improves training stability.
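
As a minimal PyTorch sketch of that idea (the latent size, channel widths, and 64×64 output resolution are illustrative assumptions, not values taken from this text), a DCGAN-style generator can be built entirely from fractional-strided convolutions, with no pooling or upsampling layers:

```python
import torch
import torch.nn as nn

class DCGANGenerator(nn.Module):
    """Toy DCGAN-style generator: no pooling or upsampling layers,
    only fractional-strided (transposed) convolutions."""
    def __init__(self, z_dim=100, base_ch=64, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            # project the latent vector to a 4x4 feature map
            nn.ConvTranspose2d(z_dim, base_ch * 8, 4, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(base_ch * 8),
            nn.ReLU(inplace=True),
            # each fractional-strided convolution doubles the spatial resolution
            nn.ConvTranspose2d(base_ch * 8, base_ch * 4, 4, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(base_ch * 4),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base_ch * 4, base_ch * 2, 4, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(base_ch * 2),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base_ch * 2, base_ch, 4, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(base_ch),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base_ch, out_ch, 4, stride=2, padding=1, bias=False),
            nn.Tanh(),  # outputs scaled to [-1, 1]
        )

    def forward(self, z):
        # z: (batch, z_dim) -> (batch, z_dim, 1, 1) so the conv stack can consume it
        return self.net(z.view(z.size(0), z.size(1), 1, 1))

# usage: 8 latent vectors -> 8 RGB images of size 64x64
g = DCGANGenerator()
fake = g(torch.randn(8, 100))  # shape: (8, 3, 64, 64)
```

The discriminator mirrors this stack with strided `nn.Conv2d` layers and LeakyReLU activations in place of pooling.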

Paper Title

Face Depth Estimation With Conditional Generative Adversarial Networks

Abstract

Depth map estimation and 3D reconstruction from single or multiple face images is an important research area in computer vision. Many methods have been proposed and developed over the past decade, yet issues such as robustness still require further research. With the advent of GPU computing, convolutional neural networks have been applied to many computer vision problems. More recently, conditional generative adversarial networks (CGANs) have attracted attention for how easily they adapt to image-to-image problems, and they have been widely applied to tasks such as background masking, segmentation, medical image processing, and super-resolution. In this work, we develop a GAN-based method to estimate the depth map of any given single face image. Many GAN variants were tested for the depth estimation task, and we conclude that the conditional Wasserstein GAN structure provides the most robust approach. We also compare the method with two other approaches, one based on deep learning and one traditional; experimental results show that the conditional WGAN offers a promising way to estimate face depth maps from face images.
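
To make the conditional Wasserstein formulation concrete (a sketch only; the function name `conditional_wgan_losses` and the concatenation-based conditioning are illustrative assumptions, not the authors' implementation), the critic scores face/depth pairs and the generator is pushed to produce depth maps the critic rates highly:

```python
import torch

def conditional_wgan_losses(critic, generator, face, depth):
    """Conditional WGAN objective for face -> depth-map translation (sketch).
    The critic scores (face, depth) pairs; losses use the Wasserstein form
    (raw critic outputs, no log/sigmoid)."""
    fake_depth = generator(face)
    # condition the critic by concatenating the face with a real or generated depth map
    real_score = critic(torch.cat([face, depth], dim=1))
    fake_score = critic(torch.cat([face, fake_depth.detach()], dim=1))
    # critic maximises real - fake, i.e. minimises its negation
    critic_loss = fake_score.mean() - real_score.mean()
    # generator maximises the critic's score on its own output
    gen_loss = -critic(torch.cat([face, fake_depth], dim=1)).mean()
    return critic_loss, gen_loss
```

In practice the critic's Lipschitz constraint is enforced with weight clipping or a gradient penalty, and image-to-image setups commonly add an L1 reconstruction term to the generator loss.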

Authors

Abdullah Taha Arslan, Erol Seke

Keywords

3D face reconstruction, generative adversarial networks, deep learning

Baidu Link

Link: https://pan.baidu.com/s/13zk5uEeuGw7f5VyL9xAong  Password: 2bgb

Latest Content

Generative Adversarial Networks (GANs) have emerged as useful generative models, which are capable of implicitly learning data distributions of arbitrarily complex dimensions. However, the training of GANs is empirically well-known for being highly unstable and sensitive. The loss functions of both the discriminator and generator concerning their parameters tend to oscillate wildly during training. Different loss functions have been proposed to stabilize the training and improve the quality of images generated. In this paper, we perform an empirical study on the impact of several loss functions on the performance of standard GAN models, Deep Convolutional Generative Adversarial Networks (DCGANs). We introduce a new improvement that employs a relativistic discriminator to replace the classical deterministic discriminator in DCGANs and implement a margin cosine loss function for both the generator and discriminator. This results in a novel loss function, namely Relativistic Margin Cosine Loss (RMCosGAN). We carry out extensive experiments with four datasets: CIFAR-10, MNIST, STL-10, and CAT. We compare RMCosGAN performance with existing loss functions based on two metrics: Fréchet inception distance and inception score. The experimental results show that RMCosGAN outperforms the existing ones and significantly improves the quality of images generated.
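
To illustrate the relativistic discriminator in isolation (a minimal sketch of the relativistic average formulation; it omits the margin cosine term that RMCosGAN adds on top, and the function names are assumed), the discriminator is trained on how much more realistic real samples look than the average fake sample, rather than scoring each sample independently:

```python
import torch
import torch.nn.functional as F

def relativistic_d_loss(real_logits, fake_logits):
    # discriminator: real samples should score higher than the average fake sample
    real_rel = real_logits - fake_logits.mean()
    fake_rel = fake_logits - real_logits.mean()
    return (F.binary_cross_entropy_with_logits(real_rel, torch.ones_like(real_rel))
            + F.binary_cross_entropy_with_logits(fake_rel, torch.zeros_like(fake_rel)))

def relativistic_g_loss(real_logits, fake_logits):
    # generator: mirror objective with the target labels swapped
    real_rel = real_logits - fake_logits.mean()
    fake_rel = fake_logits - real_logits.mean()
    return (F.binary_cross_entropy_with_logits(fake_rel, torch.ones_like(fake_rel))
            + F.binary_cross_entropy_with_logits(real_rel, torch.zeros_like(real_rel)))
```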
