Deep convolutional neural networks (CNNs) currently achieve state-of-the-art performance in denoising videos. They are typically trained with supervision, minimizing the error between the network output and ground-truth clean videos. However, in many applications, such as microscopy, noiseless videos are not available. To address these cases, we build on recent advances in unsupervised still image denoising to develop an Unsupervised Deep Video Denoiser (UDVD). UDVD is shown to perform competitively with current state-of-the-art supervised methods on benchmark datasets, even when trained only on a single short noisy video sequence. Experiments on fluorescence-microscopy and electron-microscopy data illustrate the promise of our approach for imaging modalities where ground-truth clean data is generally not available. In addition, we study the mechanisms used by trained CNNs to perform video denoising. An analysis of the gradient of the network output with respect to its input reveals that these networks perform spatio-temporal filtering that is adapted to the particular spatial structures and motion of the underlying content. We interpret this as an implicit and highly effective form of motion compensation, a widely used paradigm in traditional video denoising, compression, and analysis. Code and IPython notebooks for our analysis are available at https://sreyas-mohan.github.io/udvd/.
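As a concrete illustration of the gradient-based analysis mentioned above, the sketch below shows one way such gradients could be computed with automatic differentiation. This is a minimal example and not the authors' code: the PyTorch framework, the network interface `net`, the `(T, C, H, W)` input layout, and the helper name `output_pixel_gradient` are all assumptions made for illustration.

```python
# Hypothetical sketch: gradient of one denoised output pixel w.r.t. the noisy
# input video. The resulting map can be visualized as the adaptive
# spatio-temporal filter the network applies at that output location.
import torch

def output_pixel_gradient(net, noisy_frames, frame_idx, row, col, channel=0):
    """Return d(output[frame_idx, channel, row, col]) / d(input).

    noisy_frames: tensor of shape (T, C, H, W) holding a short noisy sequence
    (shape convention is an assumption). `net` is assumed to map a batch of
    such sequences (N, T, C, H, W) to denoised sequences of the same shape.
    """
    x = noisy_frames.clone().requires_grad_(True)      # track gradients w.r.t. the input
    denoised = net(x.unsqueeze(0)).squeeze(0)          # run the denoiser on a batch of one
    target = denoised[frame_idx, channel, row, col]    # select a single output pixel
    grad, = torch.autograd.grad(target, x)             # one row of the input-output Jacobian
    return grad                                        # same shape as the input: (T, C, H, W)
```

Inspecting how the weights in this gradient map shift across neighboring frames around moving content is what suggests the implicit motion-compensation interpretation described in the abstract.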