Some image restoration tasks, such as demosaicing, require difficult training samples to learn effective models. Existing methods address this by manually collecting new training datasets that contain enough hard samples; however, hard and easy regions coexist even within a single image. In this paper, we present a data-driven approach, PatchNet, that learns to select the most useful patches from an image to construct a new training set, instead of relying on manual or random selection. We show that this simple idea automatically selects informative samples from a large-scale dataset, yielding a surprising generalisation gain of 2.35 dB in PSNR. Beyond its remarkable effectiveness, PatchNet is also resource-friendly: it is applied only during training and therefore incurs no additional computational cost at inference.
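The patch-selection idea described above can be illustrated with a minimal sketch. The abstract does not specify PatchNet's internals, so everything here is an assumption for illustration: patches are scored by a hand-crafted difficulty proxy (local gradient energy) rather than the learned network the paper proposes, and the top-scoring patches are kept as training samples.

```python
import numpy as np

def extract_patches(img, patch=8, stride=8):
    """Slide a window over a 2-D image and collect patches."""
    H, W = img.shape
    patches = []
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            patches.append(img[y:y + patch, x:x + patch])
    return np.stack(patches)

def difficulty_score(p):
    """Hypothetical proxy for 'hardness': mean local gradient magnitude.
    PatchNet learns this scoring; a gradient heuristic stands in here."""
    gy, gx = np.gradient(p.astype(np.float64))
    return float(np.mean(np.abs(gx) + np.abs(gy)))

def select_hard_patches(img, k, patch=8, stride=8):
    """Rank all patches by score and keep the top-k as training samples."""
    patches = extract_patches(img, patch, stride)
    scores = np.array([difficulty_score(p) for p in patches])
    top = np.argsort(scores)[::-1][:k]
    return patches[top]
```

On an image with flat and textured regions, this keeps the textured patches, mirroring the paper's point that hard and easy areas coexist within a single image.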