Recent feed-forward neural methods for arbitrary image style transfer have mainly utilized the encoded feature map only up to its second-order statistics: they linearly transform the encoded feature map of a content image so that it has the same mean and variance (or covariance) as the target style feature map. In this work, we extend this second-order statistical feature matching to general distribution matching, based on the understanding that the style of an image is represented by the distribution of responses from receptive fields. Toward this generalization, we first propose a new feature transform layer that exactly matches the feature map distribution of the content image to that of the target style image. Second, we analyze recent style losses that are consistent with our new feature transform layer in order to train a decoder network that generates a style-transferred image from the transformed feature map. Our experimental results show that the stylized images obtained with our method are more similar to the target style images under all existing style measures, without losing content clearness.
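To make the contrast concrete, the following is a minimal NumPy sketch (not the paper's implementation) of the two ideas: second-order matching, which only aligns per-channel mean and standard deviation, versus an exact per-channel distribution match via rank (sorting) correspondence. Function names, the `(C, H, W)` layout, and the assumption that content and style feature maps share the same spatial size are all illustrative choices, not from the paper.

```python
import numpy as np

def second_order_match(content, style, eps=1e-5):
    """Shift/scale each channel of the content feature map (C, H, W) to the
    per-channel mean and std of the style map (second-order statistics only)."""
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    return (content - c_mean) / c_std * s_std + s_mean

def exact_distribution_match(content, style):
    """Per-channel exact distribution matching by rank: the k-th smallest
    content activation is replaced by the k-th smallest style activation,
    so each output channel has exactly the style channel's empirical
    distribution while preserving the content's spatial ordering.
    Assumes content and style have the same shape (illustrative choice)."""
    out = np.empty_like(content)
    for c in range(content.shape[0]):
        order = np.argsort(content[c].ravel())      # ranks of content values
        sorted_style = np.sort(style[c].ravel())    # style values in ascending order
        flat = np.empty_like(sorted_style)
        flat[order] = sorted_style                  # assign by matching rank
        out[c] = flat.reshape(content[c].shape)
    return out
```

After `second_order_match`, only the first two moments agree with the style; after `exact_distribution_match`, the full per-channel histogram agrees, which is the generalization the abstract argues for.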