Scanning Transmission Electron Microscopes (STEMs) acquire 2D images of a 3D sample on the scale of individual cell components. Unfortunately, these 2D images can be too noisy to be fused into a useful 3D structure, and training good denoisers is challenging due to the lack of clean-noisy image pairs. Additionally, representing a detailed 3D structure can be difficult even for clean data when regular 3D grids are used. Addressing both limitations, we propose a differentiable image formation model for STEM that allows us to learn a joint model of 2D sensor noise in STEM together with an implicit 3D model. We show that the combination of these models is able to successfully disentangle 3D signal and noise without supervision, while at the same time outperforming several baselines on synthetic and real data.
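To make the described setup concrete, the following is a minimal sketch (not the authors' implementation) of the core idea: an implicit 3D model, here a coordinate MLP, is rendered through a differentiable projection along the beam axis and fitted jointly with a simple per-pixel noise model to noisy 2D STEM-like images. The Gaussian noise model, the parallel projection geometry, and all names below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ImplicitVolume(nn.Module):
    """Maps 3D coordinates in [-1, 1]^3 to a non-negative density."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),
        )

    def forward(self, xyz):
        return self.net(xyz).squeeze(-1)

def project(volume, res=32, n_samples=32):
    """Differentiable parallel projection: integrate density along z."""
    xs = torch.linspace(-1, 1, res)
    ys = torch.linspace(-1, 1, res)
    zs = torch.linspace(-1, 1, n_samples)
    X, Y, Z = torch.meshgrid(xs, ys, zs, indexing="ij")
    pts = torch.stack([X, Y, Z], dim=-1).reshape(-1, 3)
    dens = volume(pts).reshape(res, res, n_samples)
    return dens.mean(dim=-1)  # (res, res) clean projection image

# Jointly optimized noise parameter: a single log-variance for an
# assumed Gaussian sensor-noise model (a stand-in for the learned
# 2D noise model described in the abstract).
volume = ImplicitVolume()
log_var = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam(list(volume.parameters()) + [log_var], lr=1e-3)

# Placeholder stack of noisy 2D observations of the same projection.
noisy_views = torch.rand(8, 32, 32)

for step in range(200):
    pred = project(volume)                 # rendered clean image
    var = log_var.exp()
    # Negative Gaussian log-likelihood of the noisy observations;
    # gradients flow into both the implicit volume and the noise model.
    nll = 0.5 * (((noisy_views - pred) ** 2) / var + log_var).mean()
    opt.zero_grad()
    nll.backward()
    opt.step()
```

Because both the implicit volume and the noise parameters receive gradients from the same likelihood, the clean 3D signal and the sensor noise can, in principle, be separated without any clean reference images, which is the unsupervised disentanglement the abstract refers to.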