Semantic segmentation for extracting buildings and roads from unmanned aerial vehicle (UAV) remote sensing images by deep learning has become a more efficient and convenient method than traditional manual segmentation in the surveying and mapping field. To make the model lightweight while improving its accuracy, a Lightweight and Efficient Network with Dual Context modules (LEDCNet) is proposed for extracting buildings and roads from UAV aerial remote sensing images. The proposed network adopts an encoder-decoder architecture in which a Lightweight Densely Connected Network (LDCNet) is developed as the encoder. In the decoder, dual multi-scale context modules, consisting of an Atrous Spatial Pyramid Pooling (ASPP) module and an Object Contextual Representation (OCR) module, are designed to capture more context information from the feature maps of UAV remote sensing images. Between ASPP and OCR, a Feature Pyramid Network (FPN) module is used to fuse the multi-scale features extracted from ASPP. A private dataset of UAV remote sensing images, containing 2431 training, 945 validation, and 475 test images, is constructed. The proposed model performs well on this dataset, achieving a mean intersection-over-union (mIoU) of 71.12% with only 1.4M parameters and 5.48G floating-point operations (FLOPs). More extensive experiments on the public LoveDA and CITY-OSM datasets further verify the effectiveness of the proposed model, with mIoU of 65.27% and 74.39%, respectively. The source code will be made available at https://github.com/GtLinyer/LEDCNet .
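To illustrate the kind of multi-scale context capture the decoder relies on, the sketch below implements a minimal ASPP block in PyTorch. This is an illustrative sketch only: the channel counts, dilation rates, and layer choices are assumptions for demonstration, not the exact configuration used in LEDCNet.

```python
# Minimal ASPP sketch (hedged): parallel dilated 3x3 convolutions capture
# context at several receptive-field sizes, then a 1x1 conv fuses them.
# Channel sizes and dilation rates here are illustrative assumptions.
import torch
import torch.nn as nn


class ASPP(nn.Module):
    """Atrous Spatial Pyramid Pooling over an encoder feature map."""

    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        # One dilated conv per rate; padding == dilation keeps spatial size.
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False)
            for r in rates
        ])
        # 1x1 conv projects the concatenated branches back to out_ch.
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]
        return self.project(torch.cat(feats, dim=1))


# Toy usage: a hypothetical encoder output of shape (batch, ch, H, W).
x = torch.randn(1, 64, 32, 32)
y = ASPP(64, 128)(x)
print(y.shape)  # torch.Size([1, 128, 32, 32]) — spatial size preserved
```

In the full decoder described above, features like `y` from several scales would then be fused by the FPN module before the OCR module refines them with object-level context.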