We present uSplit, a dedicated approach for trained image decomposition in the context of fluorescence microscopy images. We find that, with regular deep architectures, the best results are achieved when large image patches are used during training, making memory consumption the limiting factor for further improving performance. We therefore introduce lateral contextualization (LC), a memory-efficient way to train powerful networks, and show that LC leads to consistent and significant improvements on the task at hand. We integrate LC with U-Nets, Hierarchical AEs, and Hierarchical VAEs, for which we formulate a modified ELBO loss. Additionally, LC enables training deeper hierarchical models than otherwise possible and, interestingly, helps to reduce the tiling artefacts that are inherently unavoidable when using tiled VAE predictions. We apply uSplit to five decomposition tasks: one on a synthetic dataset and four derived from real microscopy data. LC achieves SOTA results (an average improvement of 2.36 dB PSNR over the best baseline) while simultaneously requiring considerably less GPU memory.
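To make the LC idea concrete, the following is a minimal PyTorch sketch of the core mechanism as described above: each hierarchy level receives an additional crop with a progressively larger field of view, downsampled to the same pixel size, and the pooled bottom-up features are merged into the center of the wider-context features. All names (`make_lc_inputs`, `LCEncoder`) and layer choices are hypothetical illustrations, not the authors' implementation.

```python
# Hedged sketch of lateral contextualization (LC). Assumes PyTorch;
# make_lc_inputs and LCEncoder are hypothetical names, not the paper's API.
import torch
import torch.nn as nn
import torch.nn.functional as F


def make_lc_inputs(image: torch.Tensor, center: tuple, patch: int, levels: int):
    """Crop a full-resolution patch plus progressively larger surroundings,
    each downsampled to patch x patch, so deeper hierarchy levels see a wider
    field of view at constant memory cost. Assumes `image` is (N, C, H, W)
    and `center` lies far enough from the image border for all crops."""
    cy, cx = center
    inputs = []
    for k in range(levels):
        half = (patch * 2 ** k) // 2  # level k covers a 2^k-times larger FOV
        crop = image[..., cy - half:cy + half, cx - half:cx + half]
        inputs.append(F.interpolate(crop, size=(patch, patch),
                                    mode="bilinear", align_corners=False))
    return inputs


class LCEncoder(nn.Module):
    """Toy hierarchical encoder: at level k, pooled bottom-up features have
    the same resolution as the k-th LC input and cover its central region,
    so they are zero-padded to the center and concatenated before merging."""

    def __init__(self, levels: int = 3, ch: int = 32):
        super().__init__()
        self.lc_convs = nn.ModuleList(
            nn.Conv2d(1, ch, 3, padding=1) for _ in range(levels))
        self.merge_convs = nn.ModuleList(
            nn.Conv2d(ch if k == 0 else 2 * ch, ch, 3, padding=1)
            for k in range(levels))

    def forward(self, lc_inputs):
        h, feats = None, []
        for k in range(len(self.lc_convs)):
            c = F.relu(self.lc_convs[k](lc_inputs[k]))  # context features
            if h is not None:
                h = F.avg_pool2d(h, 2)      # now matches c's resolution
                pad = c.shape[-1] // 4      # center h inside the wider context
                h = F.pad(h, (pad, pad, pad, pad))
                c = torch.cat([c, h], dim=1)
            h = F.relu(self.merge_convs[k](c))
            feats.append(h)
        return feats


# Example usage: a 1-channel 512x512 image, 64x64 training patch, 3 levels.
img = torch.randn(1, 1, 512, 512)
lc = make_lc_inputs(img, center=(256, 256), patch=64, levels=3)
feats = LCEncoder(levels=3, ch=32)(lc)
```

Note the memory trade-off this sketch illustrates: every level processes tensors of the same patch-sized spatial extent, yet the effective receptive field grows by a factor of two per level, which is what allows training deeper hierarchies without enlarging the primary input patch.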