Many videos contain flickering artifacts. Common causes of flicker include video processing algorithms, video generation algorithms, and capturing videos under certain conditions. Prior work usually requires specific guidance, such as the flickering frequency, manual annotations, or extra consistent videos, to remove the flicker. In this work, we propose a general flicker removal framework that receives only a single flickering video as input, without additional guidance. Since it is blind to a specific flickering type or guidance, we name this task "blind deflickering." The core of our approach is to use a neural atlas in cooperation with a neural filtering strategy. The neural atlas is a unified representation of all frames in a video; it provides temporal consistency guidance but is flawed in many cases. To address this, a neural network is trained to mimic a filter that learns the consistent features (e.g., color, brightness) while avoiding the artifacts introduced by the atlas. To validate our method, we construct a dataset that contains diverse real-world flickering videos. Extensive experiments show that our method achieves satisfactory deflickering performance and, on a public benchmark, even outperforms baselines that use extra guidance.
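To make the high-level idea concrete, below is a minimal conceptual sketch, not the authors' implementation, of how a neural filtering network could combine a flickering frame with its flawed-but-consistent atlas reconstruction. All module names, layer sizes, and tensor shapes here are hypothetical assumptions for illustration only.

```python
# Conceptual sketch (hypothetical, not the paper's actual architecture):
# a small network takes a flickering frame and its atlas-reconstructed
# counterpart, and predicts a deflickered frame that keeps the consistent
# cues (color, brightness) while discarding atlas artifacts.
import torch
import torch.nn as nn

class FilterNet(nn.Module):
    """Maps (flickering frame, atlas-reconstructed frame) -> filtered frame."""
    def __init__(self, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(6, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, frame: torch.Tensor, atlas_frame: torch.Tensor) -> torch.Tensor:
        # Concatenate the flickering frame with its atlas reconstruction
        # along the channel axis and predict the filtered output.
        return self.body(torch.cat([frame, atlas_frame], dim=1))

# Toy usage with random data standing in for real video frames.
net = FilterNet()
video = torch.rand(8, 3, 64, 64)        # 8 flickering frames (dummy data)
atlas_video = torch.rand(8, 3, 64, 64)  # per-frame atlas reconstructions (dummy data)
deflickered = net(video, atlas_video)
print(deflickered.shape)  # torch.Size([8, 3, 64, 64])
```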