Photoacoustic imaging has the potential to revolutionise healthcare due to the valuable information on tissue physiology contained in multispectral photoacoustic measurements. Clinical translation of the technology requires converting the high-dimensional acquired data into clinically relevant and interpretable information. In this work, we present a deep learning-based approach to semantic segmentation of multispectral photoacoustic images that facilitates the interpretation of the recorded images. Manually annotated multispectral photoacoustic imaging data serve as gold standard reference annotations and enable the training of a deep learning-based segmentation algorithm in a supervised manner. Based on a validation study with experimentally acquired data from healthy human volunteers, we show that automatic tissue segmentation can be used to create powerful analyses and visualisations of multispectral photoacoustic images. Owing to this intuitive representation of high-dimensional information, such a processing algorithm could be a valuable means of facilitating the clinical translation of photoacoustic imaging.
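The supervised setup described above — annotated multispectral pixels used to train a model that produces a per-pixel tissue label map — can be illustrated with a minimal sketch. This is a hypothetical, simplified stand-in, not the paper's method: the deep learning model is replaced by a nearest-centroid classifier over spectra, and the data are synthetic.

```python
import numpy as np

# Hypothetical illustration (not the paper's method): supervised per-pixel
# segmentation of a multispectral image. A nearest-centroid classifier over
# class-mean spectra stands in for the deep learning model to show the
# pipeline: annotated pixels -> trained model -> tissue label map.

rng = np.random.default_rng(0)
C, H, W = 16, 32, 32  # spectral channels, image height, image width

# Two synthetic "tissue" spectra (e.g. vessel vs. background) plus noise.
spectra = np.stack([np.linspace(1.0, 0.2, C), np.linspace(0.2, 1.0, C)])
labels = np.arange(H * W).reshape(H, W) % 2      # synthetic ground-truth map
image = spectra[labels].transpose(2, 0, 1)       # (C, H, W) multispectral cube
image = image + 0.05 * rng.standard_normal((C, H, W))

# "Training": estimate one mean spectrum per annotated tissue class.
pixels = image.reshape(C, -1).T                  # (H*W, C) pixel spectra
flat = labels.ravel()
centroids = np.stack([pixels[flat == k].mean(axis=0) for k in (0, 1)])

# "Inference": assign each pixel to the class with the nearest mean spectrum.
dists = np.linalg.norm(pixels[:, None, :] - centroids[None], axis=2)
pred = dists.argmin(axis=1).reshape(H, W)

accuracy = (pred == labels).mean()
print(f"pixel accuracy: {accuracy:.3f}")
```

In practice the per-pixel classifier would be replaced by a segmentation network that also exploits spatial context, but the training signal is the same: manually annotated label maps paired with multispectral measurements.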