A Transformer-based deep direct sampling method is proposed for electrical impedance tomography, a well-known severely ill-posed nonlinear boundary value inverse problem. Real-time reconstruction is achieved by evaluating the learned inverse operator between carefully designed data and the reconstructed images. An effort is made to give a specific example for a fundamental question: whether, and how, one can benefit from the theoretical structure of a mathematical problem to develop task-oriented and structure-conforming deep neural networks. Specifically, inspired by direct sampling methods for inverse problems, the 1D boundary data at different frequencies are preprocessed by a partial-differential-equation-based feature map to yield 2D harmonic extensions as different input channels. Then, by introducing learnable non-local kernels, direct sampling is recast as a modified attention mechanism. The new method achieves superior accuracy over its predecessors and contemporary operator learners, and shows robustness to noise in benchmarks. This research strengthens the insight that, despite being invented for natural language processing tasks, the attention mechanism offers great flexibility to be modified in conformity with a priori mathematical knowledge, which ultimately leads to the design of more physics-compatible neural architectures.
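To illustrate the general idea, the following is a minimal sketch (not the authors' implementation) of replacing a fixed direct-sampling kernel with a learnable attention kernel: each pixel of the stacked 2D harmonic-extension channels is treated as a token, and learnable projections `Wq`, `Wk`, `Wv` (all names and shapes hypothetical) define a non-local kernel via scaled dot-product attention.

```python
import numpy as np

def modified_attention(U, Wq, Wk, Wv):
    """Scaled dot-product attention over harmonic-extension features.

    U  : (n_pixels, c) -- flattened 2D harmonic extensions, one column
         per frequency channel (hypothetical layout).
    Wq, Wk, Wv : (c, d) -- learnable projections; together they play the
         role of a learnable non-local kernel in place of the fixed
         probing kernel of classical direct sampling.
    Returns (n_pixels, d) non-locally aggregated features.
    """
    Q, K, V = U @ Wq, U @ Wk, U @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])          # pairwise pixel affinities
    A = np.exp(scores - scores.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)               # row-wise softmax kernel
    return A @ V                                    # non-local aggregation

# Toy usage: 16 pixels, 4 frequency channels, 8-dimensional features.
rng = np.random.default_rng(0)
U = rng.standard_normal((16, 4))
Wq, Wk, Wv = (rng.standard_normal((4, 8)) for _ in range(3))
out = modified_attention(U, Wq, Wk, Wv)
```

In this reading, the softmax matrix `A` is the data-dependent analogue of the sampling kernel: rather than being fixed a priori, it is shaped by the learned projections during training.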