Photoacoustic tomography (PAT) has the potential to recover morphological and functional tissue properties with high spatial resolution. However, previous attempts to solve the optical inverse problem with supervised machine learning were hampered by the absence of labeled reference data. While this bottleneck has been tackled by simulating training data, the domain gap between real and simulated images remains an unsolved challenge. We propose a novel approach to PAT image synthesis that subdivides the challenge of generating plausible simulations into two disjoint problems: (1) probabilistic generation of realistic tissue morphology, and (2) pixel-wise assignment of corresponding optical and acoustic properties. The former is achieved with Generative Adversarial Networks (GANs) trained on semantically annotated medical imaging data. According to a validation study on a downstream task, our approach yields more realistic synthetic images than the traditional model-based approach and could therefore become a fundamental step for deep learning-based quantitative PAT (qPAT).
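To make the two-step idea concrete, the following minimal sketch illustrates the pipeline structure only: a stand-in for the GAN generator produces a semantic tissue label map, and a lookup table then assigns optical properties pixel-wise. The label taxonomy, property values, and all function names (`generate_label_map`, `assign_optical_properties`) are hypothetical placeholders and not taken from the paper.

```python
import numpy as np

# Hypothetical tissue labels; the taxonomy and property values below are
# illustrative placeholders, not the values used in the described method.
BACKGROUND, SKIN, VESSEL = 0, 1, 2

# Literature-style optical properties at a single wavelength:
# absorption mu_a [cm^-1] and reduced scattering mu_s' [cm^-1] (assumed values).
OPTICAL_PROPERTIES = {
    BACKGROUND: {"mu_a": 0.05, "mu_s_prime": 5.0},
    SKIN:       {"mu_a": 0.20, "mu_s_prime": 15.0},
    VESSEL:     {"mu_a": 2.00, "mu_s_prime": 10.0},
}

def generate_label_map(shape=(128, 128), seed=0):
    """Stand-in for step (1): in the proposed pipeline a GAN trained on
    semantically annotated medical images would generate this label map;
    here a random toy geometry is drawn so the sketch runs end to end."""
    rng = np.random.default_rng(seed)
    labels = np.full(shape, BACKGROUND, dtype=np.uint8)
    labels[:10, :] = SKIN  # superficial skin layer
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    for _ in range(3):  # a few circular "vessel" cross-sections
        cy, cx, r = rng.integers(20, 100), rng.integers(10, 118), rng.integers(3, 8)
        labels[(yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2] = VESSEL
    return labels

def assign_optical_properties(label_map):
    """Step (2): pixel-wise assignment of optical properties from the label map."""
    mu_a = np.zeros(label_map.shape, dtype=np.float32)
    mu_s_prime = np.zeros(label_map.shape, dtype=np.float32)
    for label, props in OPTICAL_PROPERTIES.items():
        mask = label_map == label
        mu_a[mask] = props["mu_a"]
        mu_s_prime[mask] = props["mu_s_prime"]
    return mu_a, mu_s_prime

labels = generate_label_map()
mu_a, mu_s_prime = assign_optical_properties(labels)
print(mu_a.shape, float(mu_a.max()), float(mu_s_prime.max()))
```

The resulting property maps would feed a conventional light (and acoustic) forward model to produce the synthetic PAT images; only the morphology generation and property assignment are sketched here.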