CVPR is the premier annual computer vision event comprising the main conference and several co-located workshops and short courses. With its high quality and low cost, it provides an exceptional value for students, academics and industry researchers. CVPR 2020 will take place at The Washington State Convention Center in Seattle, WA, from June 16 to June 20, 2020.


This paper presents a video super-resolution method jointly proposed by Tsinghua University and Huawei Noah's Ark Lab. In the field of image/video quality enhancement, a large number of papers appear every year, but few truly merit in-depth study. This paper is one of the stronger works in video super-resolution: it achieves metrics comparable to, and in some cases better than, EDVR without using deformable convolution, so it is well worth spending some time on.

Video super-resolution aims to generate high-resolution video with better visual quality from low-resolution input, and it has been attracting growing attention. In this paper, the authors propose a method that exploits temporal information in a hierarchical manner. The input sequence is divided into several groups, each corresponding to a different frame rate; these groups provide complementary information for reconstructing the missing details of the reference frame. An attention module and an inter-group fusion module are also integrated. In addition, the authors introduce a fast spatial alignment to handle large motion in videos.


  • Propose a novel neural network that effectively fuses spatio-temporal information in a hierarchical manner via frame-rate grouping;
  • Introduce a fast spatial alignment method to handle the large-motion problem;
  • The proposed method achieves SOTA performance on two mainstream video super-resolution benchmarks.
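The frame-rate grouping above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes the reference frame sits at the centre of the window and that each group pairs the reference with its neighbours at a fixed temporal stride, so small-stride groups capture slow motion and large-stride groups capture fast motion.

```python
def group_frames(frames, ref_idx):
    """Split a frame window into groups of different effective frame rates.

    Each group contains the reference frame plus the two neighbours at a
    fixed temporal stride. (A hedged sketch of the grouping idea; the
    paper's exact grouping scheme may differ.)
    """
    n = len(frames)
    max_stride = min(ref_idx, n - 1 - ref_idx)
    groups = []
    for stride in range(1, max_stride + 1):
        groups.append([frames[ref_idx - stride],
                       frames[ref_idx],
                       frames[ref_idx + stride]])
    return groups

# A 7-frame window with the reference at the centre yields 3 groups.
frames = [f"I{i}" for i in range(7)]
groups = group_frames(frames, ref_idx=3)
# groups[0] uses the nearest neighbours, groups[2] the farthest ones.
```

Each group would then be processed by shared intra-group fusion before the attention-weighted inter-group fusion described above.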



How to estimate the quality of a network's output is an important issue, and there is currently no effective solution in the field of human parsing. To address this problem, this work proposes a statistical method based on the output probability map to compute pixel-level quality information, called the pixel score. In addition, the Quality-Aware Module (QAM) is proposed to fuse different kinds of quality information, with the goal of estimating the quality of human parsing results. We combine QAM with a concise and effective network design to propose the Quality-Aware Network (QANet) for human parsing. Benefiting from the superiority of QAM and QANet, we achieve the best performance on three multiple human parsing benchmarks and one single human parsing benchmark: CIHP, MHP-v2, Pascal-Person-Part, and LIP. Without increasing training or inference time, QAM improves the AP$^\text{r}$ criterion by more than 10 points on the multiple human parsing task. QAM can also be extended to other tasks that require good quality estimation, e.g. instance segmentation; specifically, QAM improves Mask R-CNN by ~1% mAP on the COCO and LVISv1.0 datasets. Based on the proposed QAM and QANet, our overall system wins 1st place in the CVPR2019 COCO DensePose Challenge, and 1st place in Tracks 1 & 2 of the CVPR2020 LIP Challenge. Code and models are available at
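The pixel-score idea — deriving a quality estimate from the output probability map — can be illustrated with a toy statistic. This sketch is an assumption for illustration only (the paper's exact formula is not reproduced here): it uses the mean per-pixel maximum class probability, on the intuition that a confident prediction concentrates probability mass on one class per pixel.

```python
import numpy as np

def pixel_score(prob_map):
    """Estimate parsing quality from a per-pixel class probability map.

    prob_map: array of shape (C, H, W) holding softmax outputs.
    Returns the mean max-probability over pixels, a value in [0, 1].
    (A hypothetical statistic standing in for the paper's pixel score.)
    """
    confidence = prob_map.max(axis=0)  # (H, W) per-pixel confidence
    return float(confidence.mean())

# A confident map should score higher than a uniform (uncertain) one.
confident = np.zeros((3, 4, 4)); confident[0] = 1.0
uniform = np.full((3, 4, 4), 1.0 / 3)
assert pixel_score(confident) > pixel_score(uniform)
```

In QANet this kind of pixel-level signal is one of several quality cues that QAM fuses into the final quality estimate for each parsing result.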