Detection Transformers represent end-to-end object detection approaches based on a Transformer encoder-decoder architecture, exploiting the attention mechanism for global relation modeling. Although Detection Transformers deliver results on par with or even superior to their highly optimized CNN-based counterparts operating on 2D natural images, their success is closely coupled to access to a vast amount of training data. This, however, restricts the feasibility of employing Detection Transformers in the medical domain, as access to annotated data is typically limited. To tackle this issue and facilitate the advent of medical Detection Transformers, we propose a novel Detection Transformer for 3D anatomical structure detection, dubbed Focused Decoder. Focused Decoder leverages information from an anatomical region atlas to simultaneously deploy query anchors and restrict the cross-attention's field of view to regions of interest, which allows for a precise focus on relevant anatomical structures. We evaluate our proposed approach on two publicly available CT datasets and demonstrate that Focused Decoder not only provides strong detection results and thus alleviates the need for a vast amount of annotated data but also exhibits exceptional and highly intuitive explainability of results via attention weights. Code for Focused Decoder is available in our medical Vision Transformer library at github.com/bwittmann/transoar.
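The sketch below illustrates the core idea described above: restricting each query's cross-attention field of view to an atlas-derived 3D region of interest. It is a minimal, self-contained PyTorch example, not the implementation in transoar; names such as `roi_boxes`, `token_coords`, and `FocusedCrossAttention` are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): cross-attention whose
# field of view is restricted, per query, to an atlas-derived 3D region of
# interest. `roi_boxes` and `token_coords` are assumed, illustrative inputs.
import torch
import torch.nn as nn


def build_roi_attention_mask(token_coords, roi_boxes):
    """Boolean mask of shape (num_queries, num_tokens); True = blocked.

    token_coords: (num_tokens, 3) normalized z/y/x centers of encoder tokens.
    roi_boxes:    (num_queries, 6) atlas RoIs as (z1, y1, x1, z2, y2, x2).
    """
    lo, hi = roi_boxes[:, None, :3], roi_boxes[:, None, 3:]  # (Q, 1, 3)
    inside = ((token_coords[None] >= lo) & (token_coords[None] <= hi)).all(-1)
    return ~inside  # block all tokens outside the query's region of interest


class FocusedCrossAttention(nn.Module):
    """Cross-attention from query anchors to encoder tokens inside their RoI."""

    def __init__(self, dim=256, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, queries, memory, token_coords, roi_boxes):
        mask = build_roi_attention_mask(token_coords, roi_boxes)  # (Q, T)
        out, weights = self.attn(queries, memory, memory, attn_mask=mask)
        return out, weights  # attention weights can be visualized for explainability


# Toy usage: 27 query anchors, a 10x10x10 grid of encoder tokens, feature dim 256.
if __name__ == "__main__":
    T, Q, D = 1000, 27, 256
    grid = torch.stack(torch.meshgrid(
        *[torch.linspace(0, 1, 10)] * 3, indexing="ij"), -1).reshape(T, 3)
    lo = torch.rand(Q, 3).clamp(max=0.7)
    rois = torch.cat([lo, lo + 0.3], dim=1)  # boxes of edge length 0.3 per axis
    layer = FocusedCrossAttention(D)
    out, w = layer(torch.randn(1, Q, D), torch.randn(1, T, D), grid, rois)
    print(out.shape, w.shape)  # (1, 27, 256) and (1, 27, 1000)
```

In this reading of the abstract, the atlas supplies one RoI per anatomical structure; the corresponding query anchor then only ever attends to feature-map tokens inside that region, which is what yields the focused, easily interpretable attention maps.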