Previous studies have shown that both global and local adjustments are necessary for image enhancement. However, existing convolutional neural network (CNN)- and transformer-based models struggle to balance computational efficiency with the effective use of global and local information. In particular, existing methods typically adopt a global-to-local fusion mode, overlooking the importance of bidirectional interactions. To address these issues, we propose a novel mutual guidance network (MGN) that performs effective bidirectional global-local information exchange while maintaining a compact architecture. Our design adopts a two-branch framework in which one branch focuses on modeling global relations while the other processes local information. We then develop an efficient attention-based mutual guidance scheme, applied throughout the framework, for bidirectional global-local interactions. As a result, both the global and local branches benefit from mutual information aggregation. Furthermore, to refine the results produced by the MGN, we propose a novel residual integration scheme following the divide-and-conquer philosophy. Extensive experiments demonstrate the effectiveness of the proposed method, which achieves state-of-the-art performance on several public image enhancement benchmarks.
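To make the bidirectional global-local guidance idea concrete, the following is a minimal sketch, not the authors' MGN implementation: it assumes PyTorch, flattened feature tokens from a hypothetical global branch and local branch, and standard multi-head cross-attention in place of the paper's efficient attention design. Each branch queries the other, so guidance flows in both directions.

```python
# Minimal sketch of bidirectional global-local mutual guidance via
# cross-attention. Hypothetical module; the actual MGN modules differ.
import torch
import torch.nn as nn


class MutualGuidanceBlock(nn.Module):
    """Exchanges information between a global-branch feature map and a
    local-branch feature map with two cross-attention passes, so each
    branch is guided by the other (bidirectional interaction)."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        # Local features attend to global context (global -> local guidance).
        self.local_from_global = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Global features attend to local detail (local -> global guidance).
        self.global_from_local = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_g = nn.LayerNorm(dim)
        self.norm_l = nn.LayerNorm(dim)

    def forward(self, feat_global: torch.Tensor, feat_local: torch.Tensor):
        # Both inputs: (batch, tokens, dim), e.g. flattened spatial features.
        g, l = self.norm_g(feat_global), self.norm_l(feat_local)
        # Each branch queries the other and keeps a residual connection.
        local_out = feat_local + self.local_from_global(l, g, g)[0]
        global_out = feat_global + self.global_from_local(g, l, l)[0]
        return global_out, local_out


if __name__ == "__main__":
    block = MutualGuidanceBlock(dim=64)
    fg = torch.randn(2, 256, 64)   # e.g. coarse global tokens
    fl = torch.randn(2, 256, 64)   # e.g. fine local tokens
    g_out, l_out = block(fg, fl)
    print(g_out.shape, l_out.shape)  # torch.Size([2, 256, 64]) twice
```

In this sketch, the residual connections let each branch keep its own representation while absorbing guidance from the other, which is the core benefit the abstract attributes to mutual information aggregation.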