Minimally invasive surgery (MIS) offers several advantages, including minimal tissue injury, reduced blood loss, and quick recovery time; however, it imposes limitations on the surgeon's ability. Among these, alongside the lack of tactile or haptic feedback, poor visualization of the surgical site is one of the most widely acknowledged factors and leads to several surgical drawbacks, including unintentional tissue damage. In the context of robot-assisted surgery, the lack of contextual detail in the frames makes vision tasks challenging when it comes to tracking tissue and tools, segmenting the scene, and estimating pose and depth. In MIS, the acquired frames are corrupted by different types of noise and blurred by motion from various sources. Moreover, when an underwater environment is considered, as in knee arthroscopy, most of the visible noise and blur originates from the environment and from poor control over illumination and imaging conditions. Additionally, in MIS, procedures such as automatic white balancing and the transformation from raw color information to the standard RGB color space are often absent due to hardware miniaturization. There is therefore a high demand for an online preprocessing framework that can circumvent these drawbacks. Our proposed method restores a latent clean and sharp image in the standard RGB color space from its noisy, blurred, raw observation in a single preprocessing stage.
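For context, the sketch below illustrates the conventional preprocessing steps, white balancing and raw-to-sRGB conversion, that the paragraph notes are often missing in miniaturized MIS hardware. It is not the proposed single-stage method; it assumes a simple gray-world white balance, a placeholder color correction matrix, and the standard sRGB transfer function, with all function names and values being illustrative.

```python
# Minimal sketch (not the proposed method): conventional white balance and
# raw-to-sRGB conversion that are often absent in miniaturized MIS hardware.
# The gray-world assumption and the identity CCM below are illustrative.
import numpy as np


def gray_world_white_balance(raw_rgb: np.ndarray) -> np.ndarray:
    """Scale each channel so its mean matches the global mean (gray-world assumption)."""
    channel_means = raw_rgb.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / (channel_means + 1e-8)
    return raw_rgb * gains


def raw_to_srgb(raw_rgb: np.ndarray, ccm: np.ndarray = None) -> np.ndarray:
    """Apply a color correction matrix and the sRGB transfer function."""
    if ccm is None:
        # Identity CCM as a placeholder; a real pipeline would use the
        # sensor-specific camera-to-sRGB calibration matrix.
        ccm = np.eye(3)
    linear = np.clip(raw_rgb @ ccm.T, 0.0, 1.0)
    # Standard sRGB opto-electronic transfer function (gamma encoding).
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * np.power(linear, 1 / 2.4) - 0.055)


if __name__ == "__main__":
    # Simulated demosaicked raw frame in [0, 1], shape H x W x 3.
    frame = np.random.rand(480, 640, 3).astype(np.float32)
    srgb_frame = raw_to_srgb(gray_world_white_balance(frame))
    print(srgb_frame.shape, srgb_frame.min(), srgb_frame.max())
```

The proposed method replaces such hand-tuned stages (together with denoising and deblurring) with a single learned preprocessing step operating directly on the noisy, blurred raw observation.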