Optical colonoscopy (OC), the most prevalent colon cancer screening tool, has a high miss rate due to a number of factors, including the geometry of the colon (occlusions from haustral folds and sharp bends), endoscopist inexperience or fatigue, and the limited field of view of the endoscope. We present a framework to visualize the missed regions per frame during colonoscopy, providing a workable clinical solution. Specifically, we make use of 3D reconstructed virtual colonoscopy (VC) data and the insight that VC and OC share the same underlying geometry but differ in the color, texture, and specular reflections embedded in the OC domain. We introduce a lossy unpaired image-to-image translation model with an enforced shared latent space for OC and VC. This shared latent space captures the geometric information while deferring the creation of color, texture, and specular information to an additional Gaussian noise input. This noise input can be utilized to generate one-to-many mappings from VC to OC and from OC to OC. The code, data, and trained models will be released via our Computational Endoscopy Platform at https://github.com/nadeemlab/CEP.
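The one-to-many mapping idea above can be illustrated with a minimal toy sketch (this is not the paper's architecture; the names `W_z`, `W_n`, and `generate`, and the linear decoder itself, are illustrative assumptions): a decoder conditioned on a shared geometry latent `z` plus a Gaussian noise vector `n` produces a different appearance for each noise draw while the geometry code stays fixed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions for a hypothetical shared-latent generator.
LATENT_DIM, NOISE_DIM, OUT_DIM = 8, 4, 16
W_z = rng.normal(size=(OUT_DIM, LATENT_DIM))  # geometry pathway (shared latent)
W_n = rng.normal(size=(OUT_DIM, NOISE_DIM))   # appearance pathway (noise input)

def generate(z, n):
    """Decode a shared geometry latent z plus appearance noise n.

    The same z with different n yields different outputs, mimicking the
    one-to-many VC-to-OC mapping driven by the Gaussian noise input.
    """
    return np.tanh(W_z @ z + W_n @ n)

z = rng.normal(size=LATENT_DIM)            # shared OC/VC geometry code
out_a = generate(z, rng.normal(size=NOISE_DIM))
out_b = generate(z, rng.normal(size=NOISE_DIM))

# Same geometry latent, different noise draws -> distinct appearances.
assert out_a.shape == out_b.shape == (OUT_DIM,)
assert not np.allclose(out_a, out_b)
```

Here the geometry pathway plays the role of the enforced shared latent space, and the noise pathway stands in for the deferred color/texture/specular generation.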