Please use this identifier to cite or link to this item: https://hdl.handle.net/1959.11/43331
Title: Multi-Object Segmentation in Complex Urban Scenes from High-Resolution Remote Sensing Data
Contributor(s): Abdollahi, Abolfazl (author); Pradhan, Biswajeet (author); Shukla, Nagesh (author); Chakraborty, Subrata (author); Alamri, Abdullah (author)
Publication Date: 2021-09-16
Open Access: Yes
DOI: 10.3390/rs13183710
Handle Link: https://hdl.handle.net/1959.11/43331
Abstract: 

Extracting terrestrial features such as roads and buildings from aerial images with an automatic system has many uses across a wide range of fields, including disaster management, change detection, land cover assessment, and urban planning. The task is often difficult in complex scenes, such as urban scenes, where building and road objects are surrounded by shadows, vehicles, trees, etc., and appear in heterogeneous forms with lower inter-class and higher intra-class contrast. Moreover, performing such extraction manually with human specialists is time-consuming and expensive. Deep convolutional models have shown considerable performance for feature segmentation from remote sensing data in recent years. However, where obstructions cover large and continuous areas, most of these techniques still cannot detect roads and buildings well. Hence, this work's principal goal is to introduce two novel deep convolutional models from the UNet family for multi-object segmentation of roads and buildings from aerial imagery. We focused on buildings and road networks because these objects constitute a large part of urban areas. The presented models are called multi-level context gating UNet (MCG-UNet) and bi-directional ConvLSTM UNet (BCL-UNet). The proposed methods combine the advantages of the UNet model with densely connected convolutions, bi-directional ConvLSTM, and a squeeze-and-excitation module to produce high-resolution segmentation maps and preserve boundary information even under complicated backgrounds. Additionally, we implemented a simple, efficient loss function called boundary-aware loss (BAL) that allows the network to concentrate on hard semantic segmentation regions, such as overlapping areas, small objects, complex objects, and object boundaries, and to produce high-quality segmentation maps. The presented networks were tested on the Massachusetts building and road datasets.
The MCG-UNet improved the average F1 accuracy by 1.85% and 1.19% over UNet and BCL-UNet, respectively, for road extraction, and by 6.67% and 5.11% for building extraction. Additionally, the presented MCG-UNet and BCL-UNet networks were compared with other state-of-the-art deep learning networks, and the results demonstrated their superiority in multi-object segmentation tasks.
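The record does not give the exact formulation of the boundary-aware loss (BAL). As an illustrative sketch only — not the authors' implementation — the general idea of up-weighting pixels near ground-truth label boundaries in a per-pixel cross-entropy can be written in NumPy as follows; the `boundary_weight` parameter and both function names are hypothetical:

```python
import numpy as np

def boundary_mask(gt):
    """Mark pixels whose 4-neighbourhood contains a different label."""
    m = np.zeros(gt.shape, dtype=bool)
    m[:-1, :] |= gt[:-1, :] != gt[1:, :]   # compare with pixel below
    m[1:, :]  |= gt[1:, :] != gt[:-1, :]   # compare with pixel above
    m[:, :-1] |= gt[:, :-1] != gt[:, 1:]   # compare with pixel right
    m[:, 1:]  |= gt[:, 1:] != gt[:, :-1]   # compare with pixel left
    return m

def boundary_aware_bce(pred, gt, boundary_weight=2.0, eps=1e-7):
    """Binary cross-entropy with boundary pixels up-weighted.

    pred: predicted foreground probabilities in [0, 1], same shape as gt.
    gt:   binary ground-truth mask (0 = background, 1 = object).
    """
    pred = np.clip(pred, eps, 1.0 - eps)               # avoid log(0)
    bce = -(gt * np.log(pred) + (1 - gt) * np.log(1 - pred))
    w = np.where(boundary_mask(gt), boundary_weight, 1.0)
    return float((w * bce).sum() / w.sum())            # weighted mean
```

A higher `boundary_weight` makes errors at object edges cost more than errors in object interiors, which is one plausible way a loss can steer a network toward the hard boundary regions the abstract describes.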

Publication Type: Journal Article
Source of Publication: Remote Sensing, 13(18), pp. 1-22
Publisher: MDPI AG
Place of Publication: Switzerland
ISSN: 2072-4292
Fields of Research (FoR) 2020: 460106 Spatial data and applications
460306 Image processing
461103 Deep learning
Socio-Economic Objective (SEO) 2020: 280115 Expanding knowledge in the information and computing sciences
Peer Reviewed: Yes
HERDC Category Description: C1 Refereed Article in a Scholarly Journal
Appears in Collections: Journal Article
School of Science and Technology

Files in This Item: 2 files
openpublished/MultiObjectChakraborty2021JournalArticle.pdf (Published version, 100.8 MB, Adobe PDF)

This item is licensed under a Creative Commons License.