We present a physics-enhanced implicit neural representation (INR) for ultrasound (US) imaging that learns tissue properties from overlapping US sweeps. Our proposed method leverages ray-tracing-based neural rendering for novel-view US synthesis. Recent publications have demonstrated that INR models can encode a representation of a three-dimensional scene from a set of two-dimensional US frames. However, these models fail to consider the view-dependent changes in appearance and geometry intrinsic to US imaging. In our work, we discuss direction-dependent changes in the scene and show that physics-inspired rendering improves the fidelity of US image synthesis. In particular, we demonstrate experimentally that our proposed method generates geometrically accurate B-mode images for regions whose representation is ambiguous owing to view-dependent differences among US images. We conduct our experiments using simulated B-mode US sweeps of the liver and US sweeps of a spine phantom acquired with a tracked robotic arm. The experiments corroborate that our method generates US frames that enable consistent volume compounding from previously unseen views. To the best of our knowledge, the presented work is the first to address view-dependent US image synthesis using INR.
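To make the core idea concrete, below is a minimal sketch of what ray-tracing-based US rendering from an INR can look like: points are sampled along a beam direction, an MLP predicts per-point tissue properties, and echoes are accumulated while the transmitted energy attenuates with depth. All names (TissueINR, render_scanline) and the simplified two-parameter echo model are illustrative assumptions, not the paper's actual architecture or rendering equation.

```python
# Minimal sketch (PyTorch) of direction-dependent US rendering from an INR.
# The class/function names and the Beer-Lambert-style echo model are
# assumptions for illustration, not the paper's implementation.
import torch
import torch.nn as nn

class TissueINR(nn.Module):
    """Maps a 3D point to per-point tissue properties (here: attenuation
    and reflectivity) -- a stand-in for the learned representation."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),  # [attenuation, reflectivity]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.mlp(x))  # keep both properties in [0, 1]

def render_scanline(model: TissueINR, origin: torch.Tensor,
                    direction: torch.Tensor, n_samples: int = 64,
                    depth: float = 1.0) -> torch.Tensor:
    """Render one B-mode scanline: march along the beam direction,
    attenuate the transmitted energy, and record reflected echoes.
    The output depends on the beam direction by construction."""
    t = torch.linspace(0.0, depth, n_samples)
    pts = origin[None, :] + t[:, None] * direction[None, :]  # (n_samples, 3)
    props = model(pts)
    attenuation, reflectivity = props[:, 0], props[:, 1]
    dt = depth / n_samples
    # Remaining beam energy after cumulative attenuation along the ray.
    transmission = torch.exp(-torch.cumsum(attenuation * dt, dim=0))
    # Echo recorded at each depth: surviving energy times local reflectivity.
    return transmission * reflectivity

# Usage: one scanline fired straight down through the volume.
model = TissueINR()
echo = render_scanline(model, torch.zeros(3), torch.tensor([0.0, 0.0, 1.0]))
print(echo.shape)  # torch.Size([64])
```

Because the echoes depend on the cumulative attenuation encountered along each specific ray, two beams crossing the same point from different directions produce different intensities, which is the view-dependent behavior the abstract argues plain INR models miss.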