Please use this identifier to cite or link to this item: https://hdl.handle.net/1959.11/43331
Full metadata record
DC Field | Value | Language
dc.contributor.author | Abdollahi, Abolfazl | en
dc.contributor.author | Pradhan, Biswajeet | en
dc.contributor.author | Shukla, Nagesh | en
dc.contributor.author | Chakraborty, Subrata | en
dc.contributor.author | Alamri, Abdullah | en
dc.date.accessioned | 2022-02-22T03:17:53Z | -
dc.date.available | 2022-02-22T03:17:53Z | -
dc.date.issued | 2021-09-16 | -
dc.identifier.citation | Remote Sensing, 13(18), p. 1-22 | en
dc.identifier.issn | 2072-4292 | en
dc.identifier.uri | https://hdl.handle.net/1959.11/43331 | -
dc.description.abstract | <p>Automatic extraction of terrestrial features such as roads and buildings from aerial images has many uses across a wide range of fields, including disaster management, change detection, land cover assessment, and urban planning. The task is challenging in complex scenes, such as urban areas, where buildings and roads are surrounded by shadows, vehicles, trees, etc., and appear in heterogeneous forms with low inter-class and high intra-class contrast. Moreover, manual extraction by human specialists is time-consuming and expensive. Deep convolutional models have shown considerable performance for feature segmentation from remote sensing data in recent years. However, most of these techniques still fail to detect roads and buildings well across large, continuous areas of obstruction. Hence, the principal goal of this work is to introduce two novel deep convolutional models, based on the UNet family, for multi-object segmentation of roads and buildings from aerial imagery. We focus on buildings and road networks because these objects constitute a large part of urban areas. The presented models are called the multi-level context gating UNet (MCG-UNet) and the bi-directional ConvLSTM UNet (BCL-UNet). The proposed methods retain the advantages of the UNet model and incorporate densely connected convolutions, bi-directional ConvLSTM, and a squeeze-and-excitation module to produce high-resolution segmentation maps and maintain boundary information even under complicated backgrounds. Additionally, we implemented a simple, efficient loss function called boundary-aware loss (BAL) that allows a network to concentrate on hard semantic segmentation regions, such as overlapping areas, small objects, sophisticated objects, and object boundaries, and to produce high-quality segmentation maps. The presented networks were tested on the Massachusetts building and road datasets. MCG-UNet improved the average F1 accuracy by 1.85% and 1.19% compared with UNet and BCL-UNet for road extraction, and by 6.67% and 5.11% for building extraction, respectively. Additionally, the presented MCG-UNet and BCL-UNet networks were compared with other state-of-the-art deep learning-based networks, and the results demonstrated their superiority in multi-object segmentation tasks.</p> | en
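The squeeze-and-excitation module the abstract credits with preserving channel-wise information can be sketched as follows. This is a minimal NumPy illustration of the general channel-recalibration idea only, not the authors' implementation; the function names, weight shapes, and reduction ratio are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def squeeze_excite(feature_map, w1, w2):
    """Channel recalibration in the squeeze-and-excitation style.

    feature_map: (C, H, W) array.
    w1: (C // r, C) reduction weights; w2: (C, C // r) expansion weights,
    where r is the bottleneck reduction ratio.
    """
    # Squeeze: global average pooling collapses each channel to one value.
    squeezed = feature_map.mean(axis=(1, 2))    # shape (C,)
    # Excite: bottleneck MLP with ReLU, then sigmoid gates in (0, 1).
    hidden = np.maximum(0.0, w1 @ squeezed)     # shape (C // r,)
    gates = sigmoid(w2 @ hidden)                # shape (C,)
    # Scale: reweight each channel map by its learned gate.
    return feature_map * gates[:, None, None]

# Usage: 8 channels, reduction ratio r = 4.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16, 16))
w1 = rng.standard_normal((2, 8)) * 0.1
w2 = rng.standard_normal((8, 2)) * 0.1
y = squeeze_excite(x, w1, w2)
print(y.shape)  # (8, 16, 16)
```

Because the gates lie in (0, 1), the module can only attenuate uninformative channels rather than amplify them, which is one reason it is cheap to bolt onto an encoder-decoder such as UNet.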
dc.language | en | en
dc.publisher | MDPI AG | en
dc.relation.ispartof | Remote Sensing | en
dc.rights | Attribution 4.0 International | *
dc.rights.uri | http://creativecommons.org/licenses/by/4.0/ | *
dc.title | Multi-Object Segmentation in Complex Urban Scenes from High-Resolution Remote Sensing Data | en
dc.type | Journal Article | en
dc.identifier.doi | 10.3390/rs13183710 | en
dcterms.accessRights | UNE Green | en
local.contributor.firstname | Abolfazl | en
local.contributor.firstname | Biswajeet | en
local.contributor.firstname | Nagesh | en
local.contributor.firstname | Subrata | en
local.contributor.firstname | Abdullah | en
local.profile.school | School of Science and Technology | en
local.profile.email | schakra3@une.edu.au | en
local.output.category | C1 | en
local.record.place | au | en
local.record.institution | University of New England | en
local.publisher.place | Switzerland | en
local.identifier.runningnumber | 3710 | en
local.format.startpage | 1 | en
local.format.endpage | 22 | en
local.identifier.scopusid | 85115222755 | en
local.peerreviewed | Yes | en
local.identifier.volume | 13 | en
local.identifier.issue | 18 | en
local.access.fulltext | Yes | en
local.contributor.lastname | Abdollahi | en
local.contributor.lastname | Pradhan | en
local.contributor.lastname | Shukla | en
local.contributor.lastname | Chakraborty | en
local.contributor.lastname | Alamri | en
dc.identifier.staff | une-id:schakra3 | en
local.profile.orcid | 0000-0002-0102-5424 | en
local.profile.role | author | en
local.profile.role | author | en
local.profile.role | author | en
local.profile.role | author | en
local.profile.role | author | en
local.identifier.unepublicationid | une:1959.11/43331 | en
dc.identifier.academiclevel | Academic | en
dc.identifier.academiclevel | Academic | en
dc.identifier.academiclevel | Academic | en
dc.identifier.academiclevel | Academic | en
dc.identifier.academiclevel | Academic | en
local.title.maintitle | Multi-Object Segmentation in Complex Urban Scenes from High-Resolution Remote Sensing Data | en
local.relation.fundingsourcenote | This research is supported by the Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), Faculty of Engineering and IT, University of Technology Sydney (UTS). This work is also in part supported by the Researchers Supporting Project, King Saud University, Riyadh, Saudi Arabia, under Project RSP-2021/14. | en
local.output.categorydescription | C1 Refereed Article in a Scholarly Journal | en
local.search.author | Abdollahi, Abolfazl | en
local.search.author | Pradhan, Biswajeet | en
local.search.author | Shukla, Nagesh | en
local.search.author | Chakraborty, Subrata | en
local.search.author | Alamri, Abdullah | en
local.open.fileurl | https://rune.une.edu.au/web/retrieve/20635a87-68f8-42a0-99d8-738b2e1d24dc | en
local.uneassociation | No | en
local.atsiresearch | No | en
local.sensitive.cultural | No | en
local.year.published | 2021 | en
local.fileurl.open | https://rune.une.edu.au/web/retrieve/20635a87-68f8-42a0-99d8-738b2e1d24dc | en
local.fileurl.openpublished | https://rune.une.edu.au/web/retrieve/20635a87-68f8-42a0-99d8-738b2e1d24dc | en
local.subject.for2020 | 460106 Spatial data and applications | en
local.subject.for2020 | 460306 Image processing | en
local.subject.for2020 | 461103 Deep learning | en
local.subject.seo2020 | 280115 Expanding knowledge in the information and computing sciences | en
Appears in Collections: Journal Article
School of Science and Technology

Files in This Item:
2 files

File | Description | Size | Format
openpublished/MultiObjectChakraborty2021JournalArticle.pdf | Published version | 100.8 MB | Adobe PDF
SCOPUS Citations: 37 (checked on Oct 26, 2024)
Page view(s): 906 (checked on Mar 9, 2023)
Download(s): 4 (checked on Mar 9, 2023)
This item is licensed under a Creative Commons Attribution 4.0 International License.