An efficient deep learning model that can run in real time for polyp detection is crucial to reducing the polyp miss rate during screening procedures. Convolutional neural networks (CNNs) are sensitive to small changes in the input image. A CNN-based model may miss the same polyp appearing in a series of consecutive frames and produce unstable detection output due to changes in camera pose, lighting conditions, light reflection, etc. In this study, we attempt to tackle this problem by integrating temporal information from neighboring frames. We propose an efficient feature concatenation method for a CNN-based encoder-decoder model that adds no complexity to the model. The proposed method incorporates extracted feature maps of previous frames to detect polyps in the current frame. The experimental results demonstrate that the proposed feature concatenation method improves the overall performance of automatic polyp detection in videos. The following results are obtained on a public video dataset: sensitivity 90.94%, precision 90.53%, and specificity 92.46%.
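The exact fusion architecture is not specified in the abstract, but the core idea of concatenating previous frames' encoder feature maps with the current frame's, then projecting back to the original channel count so the decoder is unchanged, can be sketched as follows. This is a minimal NumPy illustration, assuming a per-frame feature map of shape (C, H, W) and a learned 1x1-convolution projection (here `proj_weights`, a hypothetical name); it is not the paper's actual implementation.

```python
import numpy as np

def concat_temporal_features(current, previous, proj_weights):
    """Fuse the current frame's encoder features with those of previous
    frames by channel-wise concatenation, then apply a 1x1 convolution
    (a per-pixel linear map over channels) to restore the original channel
    count so the decoder's input shape, and hence its complexity, is unchanged.

    current:      array of shape (C, H, W)
    previous:     list of k arrays, each of shape (C, H, W)
    proj_weights: array of shape (C, (k + 1) * C), the 1x1-conv kernel
    """
    stacked = np.concatenate([current] + previous, axis=0)  # ((k+1)*C, H, W)
    c_out, c_in = proj_weights.shape
    assert c_in == stacked.shape[0], "projection must match stacked channels"
    # einsum applies the (C_out, C_in) matrix at every spatial location,
    # which is exactly a 1x1 convolution without padding or stride.
    return np.einsum("oc,chw->ohw", proj_weights, stacked)

# Example: fuse two previous frames into the current frame's features.
C, H, W = 4, 8, 8
rng = np.random.default_rng(0)
cur = rng.random((C, H, W))
prev = [rng.random((C, H, W)) for _ in range(2)]
weights = rng.random((C, 3 * C))
fused = concat_temporal_features(cur, prev, weights)
# fused.shape == (4, 8, 8): same shape as a single-frame feature map
```

In a real model the projection weights would be a learned `Conv2d` layer with kernel size 1; the point of the sketch is that the decoder sees the same (C, H, W) tensor shape whether or not temporal context is used.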