This paper tackles a new problem setting: reinforcement learning with pixel-wise rewards (pixelRL) for image processing. Since the introduction of the deep Q-network, deep RL has achieved great success; however, its applications to image processing remain limited. We therefore extend deep RL to pixelRL for various image processing applications. In pixelRL, each pixel has an agent, and the agent changes the pixel value by taking an action. We also propose an effective learning method for pixelRL that significantly improves performance by considering the future states not only of a pixel itself but also of its neighboring pixels. The proposed method can be applied to image processing tasks that require pixel-wise manipulations, to which deep RL has never been applied. We apply the proposed method to three such tasks: image denoising, image restoration, and local color enhancement. Our experimental results demonstrate that the proposed method achieves performance comparable to or better than that of state-of-the-art methods based on supervised learning.
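To make the pixelRL setting concrete, the following is a minimal sketch (not the paper's implementation) of a single pixelRL step: every pixel acts as an agent that selects one discrete action, and applying the chosen action map updates the whole image at once. The three-action set used here (keep, +1, -1) is a simplified, hypothetical stand-in for the richer filter-based actions a real system would use.

```python
import numpy as np

def pixelrl_step(image: np.ndarray, actions: np.ndarray) -> np.ndarray:
    """Apply one pixel-wise action map to an 8-bit grayscale image.

    actions[i, j] selects the action of the agent at pixel (i, j):
      0 -> keep the pixel value, 1 -> increment by 1, 2 -> decrement by 1.
    """
    deltas = np.array([0, 1, -1])                      # per-action value change
    updated = image.astype(np.int16) + deltas[actions]  # avoid uint8 overflow
    return np.clip(updated, 0, 255).astype(np.uint8)    # stay in valid range

# Toy usage: a random 4x4 image and one random action per pixel.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)
acts = rng.integers(0, 3, size=(4, 4))
out = pixelrl_step(img, acts)
```

In the full method, a policy network would choose `acts` to maximize the sum of pixel-wise rewards (e.g., the per-pixel decrease in distance to a clean target image), repeated over several steps.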