For natural image matting, context information plays a crucial role in estimating alpha mattes, especially when it is challenging to distinguish the foreground from the background. Existing deep learning-based methods exploit specifically designed context aggregation modules to refine encoder features. However, the effectiveness of these modules has not been thoroughly explored. In this paper, we conduct extensive experiments to reveal that context aggregation modules are actually not as effective as expected. We also demonstrate that, when trained on large image patches, basic encoder-decoder networks with a larger receptive field can effectively aggregate context to achieve better performance. Building upon these findings, we propose a simple yet effective matting network, named AEMatter, which enlarges the receptive field by incorporating an appearance-enhanced axis-wise learning block into the encoder and adopting a hybrid transformer decoder. Experimental results on four datasets demonstrate that AEMatter significantly outperforms state-of-the-art matting methods (e.g., on the Adobe Composition-1K dataset, \textbf{25\%} and \textbf{40\%} reductions in SAD and MSE, respectively, compared with MatteFormer). The code and model are available at \url{https://github.com/QLYoo/AEMatter}.