We present uSplit, a dedicated approach for trained image decomposition in the context of fluorescence microscopy images. We find that the best results using regular deep architectures are achieved when large image patches are used during training, making memory consumption the limiting factor to further improving performance. We therefore introduce lateral contextualization (LC), a memory-efficient way to train powerful networks, and show that LC leads to consistent and significant improvements on the task at hand. We integrate LC with U-Nets, Hierarchical AEs, and Hierarchical VAEs, for which we formulate a modified ELBO loss. Additionally, LC enables training deeper hierarchical models than otherwise possible and, interestingly, helps to reduce tiling artefacts that are inherently impossible to avoid when using tiled VAE predictions. We apply uSplit to five decomposition tasks, one on a synthetic dataset and four derived from real microscopy data. LC achieves SOTA results (an average improvement of 2.36 dB PSNR over the best baseline), while simultaneously requiring considerably less GPU memory.