Estimating the quality of a network's output is an important issue, and the field of human parsing currently lacks an effective solution. To address this problem, this work proposes a statistical method that derives pixel-level quality information, termed the pixel score, from the output probability map. In addition, we propose the Quality-Aware Module (QAM), which fuses different sources of quality information to estimate the quality of human parsing results. We combine QAM with a concise and effective network design to propose the Quality-Aware Network (QANet) for human parsing. Benefiting from QAM and QANet, we achieve the best performance on three multiple human parsing benchmarks and one single human parsing benchmark: CIHP, MHP-v2, Pascal-Person-Part, and LIP. Without increasing training or inference time, QAM improves the AP$^\text{r}$ criterion by more than 10 points on the multiple human parsing task. QAM also extends to other tasks that benefit from quality estimation, e.g., instance segmentation; specifically, QAM improves Mask R-CNN by ~1% mAP on the COCO and LVISv1.0 datasets. Based on the proposed QAM and QANet, our overall system won 1st place in the CVPR2019 COCO DensePose Challenge and 1st place in Tracks 1 & 2 of the CVPR2020 LIP Challenge. Code and models are available at https://github.com/soeaver/QANet.
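The abstract does not spell out how the pixel score is computed. As a rough Python sketch of the idea, assuming the pixel score is an aggregate statistic of the per-pixel winning-class probabilities within the predicted foreground (the function name `pixel_score`, the `prob_map` layout, and the `threshold` parameter are illustrative assumptions, not the paper's exact definition):

```python
import numpy as np

def pixel_score(prob_map, threshold=0.5):
    """Hypothetical sketch of a pixel-level quality statistic.

    prob_map: (C, H, W) softmax output of the parsing network.
    Returns the mean winning-class probability over pixels predicted
    as foreground (any non-background class) with sufficient confidence.
    """
    prob_map = np.asarray(prob_map)
    max_prob = prob_map.max(axis=0)   # (H, W) confidence of the winning class
    pred = prob_map.argmax(axis=0)    # (H, W) predicted class index per pixel

    # Restrict the statistic to confidently predicted foreground pixels.
    fg = (pred > 0) & (max_prob > threshold)
    if not fg.any():
        return 0.0
    return float(max_prob[fg].mean())
```

Under this reading, a higher score indicates that the network assigned high confidence to the pixels making up the predicted person, which can then be fused with other quality cues (e.g., a box or IoU score) inside a module such as QAM.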