WGAN improves on the original GAN mainly by changing the loss function; with this modified loss, WGAN performs well even when built from fully connected layers.
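As a rough illustration of the loss-function change, the sketch below shows the WGAN critic objective with weight clipping, using a toy linear critic in plain NumPy. The critic function, data, and clipping constant here are illustrative assumptions, not any paper's actual implementation.

```python
# Minimal sketch of the WGAN critic objective (assumed toy setup, not a real model).
import numpy as np

rng = np.random.default_rng(0)

def critic(x, w):
    """Toy critic: a linear score function in place of a trained network."""
    return x @ w

def wgan_critic_loss(real, fake, w):
    # The critic maximizes E[f(real)] - E[f(fake)] (an estimate of the
    # Wasserstein distance), so the minimized loss is the negative of that.
    return -(critic(real, w).mean() - critic(fake, w).mean())

def clip_weights(w, c=0.01):
    # Original WGAN enforces the Lipschitz constraint by clipping
    # critic weights to [-c, c] after each update.
    return np.clip(w, -c, c)

real = rng.normal(loc=1.0, size=(64, 2))   # stand-in "real" samples
fake = rng.normal(loc=-1.0, size=(64, 2))  # stand-in "generated" samples
w = clip_weights(rng.normal(size=2))

loss = wgan_critic_loss(real, fake, w)
```

Because the loss is a difference of expectations rather than a saturating log-likelihood term, it remains informative even for simple architectures, which is one reason WGAN trains well on fully connected networks.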

Paper Title

Face Depth Estimation With Conditional Generative Adversarial Networks

Abstract

Estimating depth maps and reconstructing 3D structure from single or multiple face images is an important research area in computer vision. Many methods have been proposed and developed over the past decade; however, issues such as robustness still require further research. With the advent of GPU computing, convolutional neural networks have been applied to many computer vision problems. More recently, conditional generative adversarial networks (CGANs) have attracted attention because they adapt easily to many image-to-image problems, and they have been applied to a variety of tasks such as background masking, segmentation, medical image processing, and super-resolution. In this work, we develop a GAN-based method to estimate the depth map of any given single face image. Several GAN variants were tested for this depth-estimation task, and we conclude that a conditional Wasserstein GAN structure provides the most robust approach. We also compare the method against two alternatives, one based on deep learning and one traditional; experimental results show that WGANs offer a promising approach for estimating face depth maps from face images.

Authors

Abdullah Taha Arslan, Erol Seke

Keywords

3D face reconstruction, generative adversarial networks, deep learning

Baidu Link

Link: https://pan.baidu.com/s/13zk5uEeuGw7f5VyL9xAong Password: 2bgb

Latest Content

Inspired by ideas from optimal transport theory we present Trust the Critics (TTC), a new algorithm for generative modelling. This algorithm eliminates the trainable generator from a Wasserstein GAN; instead, it iteratively modifies the source data using gradient descent on a sequence of trained critic networks. This is motivated in part by the misalignment which we observed between the optimal transport directions provided by the gradients of the critic and the directions in which data points actually move when parametrized by a trainable generator. Previous work has arrived at similar ideas from different viewpoints, but our basis in optimal transport theory motivates the choice of an adaptive step size which greatly accelerates convergence compared to a constant step size. Using this step size rule, we prove an initial geometric convergence rate in the case of source distributions with densities. These convergence rates cease to apply only when a non-negligible set of generated data is essentially indistinguishable from real data. Resolving the misalignment issue improves performance, which we demonstrate in experiments that show that given a fixed number of training epochs, TTC produces higher quality images than a comparable WGAN, albeit at increased memory requirements. In addition, TTC provides an iterative formula for the transformed density, which traditional WGANs do not. Finally, TTC can be applied to map any source distribution onto any target; we demonstrate through experiments that TTC can obtain competitive performance in image generation, translation, and denoising without dedicated algorithms.
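The core TTC update described above can be sketched as follows: rather than training a generator, samples are moved directly by gradient ascent on a critic. In this sketch the critic is a hand-written quadratic stand-in for a trained network, and the step size is a plain constant rather than the paper's adaptive rule; both are assumptions for illustration only.

```python
# Hedged sketch of the TTC-style update: push source samples along the
# gradient of a critic instead of training a generator.
import numpy as np

def critic_grad(x, target_mean):
    # Gradient of a toy critic f(x) = -||x - target_mean||^2 / 2,
    # which scores points near the target distribution's mean highly.
    # In TTC this would be the gradient of a trained critic network.
    return target_mean - x

def ttc_step(x, target_mean, h=0.5):
    # Gradient ascent on the critic moves samples toward the target;
    # TTC uses an adaptive step size here, not a constant h.
    return x + h * critic_grad(x, target_mean)

rng = np.random.default_rng(0)
x = rng.normal(size=(128, 2))        # source samples
target = np.array([3.0, -1.0])       # mean of the toy target distribution

for _ in range(10):                  # sequence of critic steps
    x = ttc_step(x, target)
```

With this quadratic critic each step halves the distance to the target mean, mirroring the geometric convergence rate the abstract proves for source distributions with densities.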
