Attention has proven to be an effective mechanism for capturing long-range dependencies. However, it has so far not been deployed in invertible networks, because making a network invertible requires every component to be a bijective transformation, and a standard attention block is not. In this paper, we propose an invertible attention module that can be plugged into existing invertible models. We prove mathematically, and verify experimentally, that the invertibility of an attention module can be achieved by carefully constraining its Lipschitz constant. We validate the invertibility of our invertible attention on the image reconstruction task with three popular datasets: CIFAR-10, SVHN, and CelebA. We also show that our invertible attention achieves performance comparable to standard, non-invertible attention on dense prediction tasks.
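The abstract does not spell out the construction, but a common way to turn a Lipschitz constraint into invertibility is the residual form y = x + g(x) with Lip(g) < 1, which can be inverted by fixed-point iteration (as in invertible residual networks). The sketch below only illustrates that general idea with an attention-style residual branch; the class names, the spectral normalization of the projections, and the contraction factor are illustrative assumptions, not the paper's actual construction.

```python
# Minimal sketch (assumption): an i-ResNet-style residual block y = x + g(x),
# where g is an attention-style map rescaled to be (locally) contractive.
# NOTE: spectral normalization of Q/K/V plus a global scale is a crude
# heuristic here; the paper's own Lipschitz constraint is more careful.
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm


class ContractiveAttention(nn.Module):
    """Attention-style branch with spectrally normalized projections and a global rescale."""

    def __init__(self, dim: int, contraction: float = 0.5):
        super().__init__()
        assert 0.0 < contraction < 1.0
        self.q = spectral_norm(nn.Linear(dim, dim, bias=False))
        self.k = spectral_norm(nn.Linear(dim, dim, bias=False))
        self.v = spectral_norm(nn.Linear(dim, dim, bias=False))
        self.contraction = contraction  # hypothetical scale keeping the branch contractive

    def forward(self, x):
        # x: (batch, tokens, dim)
        scores = self.q(x) @ self.k(x).transpose(-2, -1) / x.shape[-1] ** 0.5
        attn = torch.softmax(scores, dim=-1)
        return self.contraction * (attn @ self.v(x))


class InvertibleAttentionBlock(nn.Module):
    """Residual block y = x + g(x); invertible when g is a contraction."""

    def __init__(self, dim: int):
        super().__init__()
        self.g = ContractiveAttention(dim)

    def forward(self, x):
        return x + self.g(x)

    @torch.no_grad()
    def inverse(self, y, num_iters: int = 100):
        # Banach fixed-point iteration x_{t+1} = y - g(x_t), which converges
        # when g is a contraction.
        x = y.clone()
        for _ in range(num_iters):
            x = y - self.g(x)
        return x


if __name__ == "__main__":
    block = InvertibleAttentionBlock(dim=16)
    block.eval()  # freeze the spectral-norm estimates so forward and inverse use the same g
    x = torch.randn(2, 8, 16)
    y = block(x)
    x_rec = block.inverse(y)
    print(torch.max(torch.abs(x - x_rec)))  # reconstruction error, should be near zero
```

The residual structure mirrors how a Lipschitz-bounded attention block could be dropped into existing invertible architectures: the forward pass is a plain residual layer, and the inverse is recovered iteratively rather than in closed form.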