TryOnGAN is a recent virtual try-on approach that generates highly realistic images and outperforms most previous approaches. In this article, we reproduce the TryOnGAN implementation and probe it along three axes: the impact of transfer learning, variants of pose-conditioned image generation, and properties of latent-space interpolation. Some of these facets have not been explored in the literature before. We find that transfer learning helps training initially, but its gains fade as models train longer, and that pose conditioning via concatenation performs better. The latent space self-disentangles pose and style features, enabling style transfer across poses. Our code and models are available as open source.