Arbitrary style transfer is a technique that produces a new image from two inputs: a content image and a style image. The output is a novel image synthesized by the algorithm itself. Balancing the structural and stylistic components has been the central challenge addressed by state-of-the-art methods: applying an artistic style over the structure of the content image while maintaining visual consistency remains difficult. In this work, we tackle these problems with a deep learning approach based on Convolutional Neural Networks. Our implementation first separates the foreground from the background of the content image using a pre-trained Detectron 2 model, and then applies the arbitrary style transfer technique used in SANet to each part. Once the two stylized images are obtained, we stitch them back together to form the final result.
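The final stitching step described above can be sketched as a simple masked composite. This is a minimal NumPy illustration, not the paper's implementation: the binary mask is assumed to come from the Detectron 2 segmentation, and the two stylized images from SANet, all represented here as plain arrays.

```python
import numpy as np

def composite(fg_stylized, bg_stylized, mask):
    """Stitch two stylized images using a binary foreground mask.

    mask:        HxW array, 1 where the segmentation marks foreground.
    fg_stylized: HxWx3 stylized rendering of the foreground region.
    bg_stylized: HxWx3 stylized rendering of the background region.
    """
    # Add a channel axis so the mask broadcasts over RGB channels.
    m = mask[..., None].astype(fg_stylized.dtype)
    return m * fg_stylized + (1.0 - m) * bg_stylized

# Toy example: a 2x2 image whose left column is foreground.
fg = np.full((2, 2, 3), 0.9)
bg = np.zeros((2, 2, 3))
mask = np.array([[1, 0], [1, 0]])
out = composite(fg, bg, mask)
```

In practice the mask boundary would be feathered or blended to avoid visible seams between the two stylized regions, but the hard composite above captures the core idea.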