Phase aberration is one of the primary sources of image quality degradation in ultrasound imaging, induced by spatial variations in sound speed across a heterogeneous medium. This effect distorts the transmitted wavefront and prevents coherent summation of echo signals, resulting in suboptimal image quality. In real experiments, obtaining non-aberrated ground truth is extremely challenging, if not infeasible. This limitation hinders deep learning-based phase aberration correction techniques, which must rely solely on simulated data and therefore suffer from the domain shift between simulated and experimental data. Here, for the first time, we propose a deep learning-based method that compensates for the phase aberration effect without requiring reference data. We train a network in which both the input and the target output are randomly aberrated radio frequency (RF) data. Moreover, we demonstrate that a conventional loss function such as the mean squared error is inadequate for training the network to achieve optimal performance. Instead, we propose an adaptive mixed loss function that employs both B-mode and RF data, resulting in more efficient convergence and enhanced performance. Source code is available at \url{http://code.sonography.ai}.
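As an informal illustration of such a mixed objective (a minimal sketch, not the released implementation at the URL above), the following PyTorch snippet combines an RF-domain MSE term with an MSE term computed on B-mode images obtained by envelope detection and log compression of the RF data. The Hilbert-transform envelope detection, the 60 dB dynamic range, and the scalar weight \texttt{alpha} that blends the two terms are illustrative assumptions; the paper's adaptive weighting scheme would replace the fixed \texttt{alpha} argument.

\begin{verbatim}
import torch
import torch.nn as nn


class MixedRFBModeLoss(nn.Module):
    """Sketch of a mixed loss on RF and B-mode representations.

    The B-mode term is derived from the RF signal via envelope detection
    (FFT-based Hilbert transform along the axial dimension) followed by
    log compression. The blending weight `alpha` is a placeholder for an
    adaptive schedule.
    """

    def __init__(self, dynamic_range_db: float = 60.0):
        super().__init__()
        self.dynamic_range_db = dynamic_range_db
        self.mse = nn.MSELoss()

    @staticmethod
    def _envelope(rf: torch.Tensor, axial_dim: int = -1) -> torch.Tensor:
        # Analytic signal via an FFT-based Hilbert transform.
        n = rf.shape[axial_dim]
        spectrum = torch.fft.fft(rf, dim=axial_dim)
        h = torch.zeros(n, device=rf.device, dtype=spectrum.dtype)
        h[0] = 1
        if n % 2 == 0:
            h[n // 2] = 1
            h[1:n // 2] = 2
        else:
            h[1:(n + 1) // 2] = 2
        shape = [1] * rf.dim()
        shape[axial_dim] = n
        analytic = torch.fft.ifft(spectrum * h.view(shape), dim=axial_dim)
        return analytic.abs()

    def _bmode(self, rf: torch.Tensor) -> torch.Tensor:
        # Envelope detection, normalization, and log compression.
        env = self._envelope(rf)
        env = env / (env.amax(dim=(-2, -1), keepdim=True) + 1e-12)
        bmode_db = 20.0 * torch.log10(env + 1e-12)
        # Clamp to the displayed dynamic range and rescale to [0, 1].
        bmode_db = torch.clamp(bmode_db, min=-self.dynamic_range_db)
        return (bmode_db + self.dynamic_range_db) / self.dynamic_range_db

    def forward(self, pred_rf, target_rf, alpha: float):
        rf_term = self.mse(pred_rf, target_rf)
        bmode_term = self.mse(self._bmode(pred_rf), self._bmode(target_rf))
        # alpha in [0, 1] shifts emphasis between the RF and B-mode domains.
        return alpha * rf_term + (1.0 - alpha) * bmode_term
\end{verbatim}

In a training loop, both \texttt{pred\_rf} and \texttt{target\_rf} would be randomly aberrated RF frames of the same scene, consistent with the reference-free training scheme described above; \texttt{alpha} could then be scheduled over epochs to shift emphasis between the two domains.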