Accurate segmentation of retinal fluids in 3D Optical Coherence Tomography (OCT) images is key for the diagnosis and personalized treatment of eye diseases. While deep learning has been successful at this task, supervised models often fail on images that do not resemble the labeled training examples, e.g., images acquired with a different device. We propose a novel semi-supervised learning framework for the segmentation of volumetric images from new, unlabeled domains. We jointly use supervised and contrastive learning, and introduce a contrastive pairing scheme that exploits the similarity between nearby slices in 3D. In addition, we propose channel-wise aggregation as an alternative to the conventional spatial-pooling aggregation for projecting contrastive feature maps. We evaluate our method for domain adaptation from a labeled source domain to an unlabeled target domain, each containing images acquired with a different device. In the target domain, our method achieves a Dice coefficient 13.8% higher than SimCLR (a state-of-the-art contrastive framework) and yields results comparable to an upper bound obtained with supervised training in that domain. In the source domain, our model also improves results by 5.4% Dice by successfully leveraging information from the many unlabeled images.
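To make the two ideas in the abstract concrete, below is a minimal, hypothetical sketch in PyTorch. It contrasts a conventional spatial-pooling projection (average over H, W, then an MLP head, as in SimCLR-style pipelines) with an assumed channel-wise aggregation variant (average over the channel dimension, keeping spatial layout), and it builds a positive-pair mask that treats nearby slices of the same volume as positives. All class names, the exact pooling axes, and parameters such as proj_dim and max_dist are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only (assumed interpretation, not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialPoolProjection(nn.Module):
    """Conventional aggregation: average over spatial dims, then an MLP head."""

    def __init__(self, channels: int, proj_dim: int = 128):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(channels, channels), nn.ReLU(inplace=True),
            nn.Linear(channels, proj_dim),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) -> (B, C) by global average pooling over H, W
        pooled = feats.mean(dim=(2, 3))
        return F.normalize(self.head(pooled), dim=1)


class ChannelAggProjection(nn.Module):
    """Assumed channel-wise variant: aggregate over channels, keep spatial layout."""

    def __init__(self, spatial_size: int, proj_dim: int = 128):
        super().__init__()
        # spatial_size is H*W of the feature map (assumed fixed here).
        self.head = nn.Sequential(
            nn.Linear(spatial_size, spatial_size), nn.ReLU(inplace=True),
            nn.Linear(spatial_size, proj_dim),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) -> (B, H*W) by averaging over the channel dim
        pooled = feats.mean(dim=1).flatten(start_dim=1)
        return F.normalize(self.head(pooled), dim=1)


def nearby_slice_positive_mask(volume_ids: torch.Tensor,
                               slice_ids: torch.Tensor,
                               max_dist: int = 2) -> torch.Tensor:
    """Assumed pairing rule: slices from the same volume whose indices differ
    by at most max_dist are treated as contrastive positives."""
    same_vol = volume_ids[:, None] == volume_ids[None, :]
    close = (slice_ids[:, None] - slice_ids[None, :]).abs() <= max_dist
    mask = same_vol & close
    mask.fill_diagonal_(False)  # a slice is not its own positive
    return mask
```

Such a mask could replace the "augmented views of the same image" positives of standard SimCLR in an NT-Xent-style loss, which is one plausible way to exploit the similarity of nearby slices in a 3D volume.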