Recently, there has been increasing concern about privacy issues raised by the use of personally identifiable information in machine learning. However, previous portrait matting methods have all been based on identifiable portrait images. To fill this gap, we present P3M-10k, the first large-scale anonymized benchmark for Privacy-Preserving Portrait Matting. P3M-10k consists of 10,000 high-resolution face-blurred portrait images along with high-quality alpha mattes. We systematically evaluate both trimap-free and trimap-based matting methods on P3M-10k and find that existing methods show different generalization capabilities under the Privacy-Preserving Training (PPT) setting, i.e., training on face-blurred images and testing on arbitrary images. To devise a better trimap-free portrait matting model, we propose P3M-Net, which leverages a unified framework for both semantic perception and detail matting, and specifically emphasizes the interaction between them and the encoder to facilitate the matting process. Extensive experiments on P3M-10k demonstrate that P3M-Net outperforms state-of-the-art methods in terms of both objective metrics and subjective visual quality. Moreover, it shows good generalization capacity under the PPT setting, confirming the value of P3M-10k for facilitating future research and enabling potential real-world applications. The source code and dataset are available at https://github.com/JizhiziLi/P3M
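To make the PPT setting concrete, below is a minimal sketch of the face-anonymization step it presupposes: blurring the face region of a training image so that identity is obscured while the rest of the portrait (hair, body, background boundaries) remains intact for matting supervision. This is an illustrative assumption, not the dataset's actual anonymization pipeline; the `blur_face` helper and the externally supplied `face_box` bounding box are hypothetical, standing in for whatever face detector and obfuscation parameters the authors used.

```python
import cv2
import numpy as np


def blur_face(image: np.ndarray, face_box: tuple) -> np.ndarray:
    """Blur the face region of a portrait image (PPT-style anonymization).

    `face_box` is an (x, y, w, h) pixel bounding box; obtaining it (e.g.
    from a face detector) is outside the scope of this sketch.
    """
    x, y, w, h = face_box
    anonymized = image.copy()
    face = anonymized[y:y + h, x:x + w]
    # Scale the kernel with the face size so identity is obscured at any
    # resolution; cv2.GaussianBlur requires an odd kernel size.
    k = max(w, h) // 2 | 1
    anonymized[y:y + h, x:x + w] = cv2.GaussianBlur(face, (k, k), 0)
    return anonymized
```

Under the PPT setting, a model is trained only on such anonymized images (paired with their unmodified alpha mattes) and then evaluated on arbitrary, non-blurred portraits, which is what exposes the generalization gap the paper measures.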