Please use this identifier to cite or link to this item: https://hdl.handle.net/1959.11/61409
Full metadata record
DC Field | Value | Language
dc.contributor.author | Sun, Zhe | en
dc.contributor.author | Chiong, Raymond | en
dc.contributor.author | Hu, Zhengping | en
dc.contributor.author | Li, Shufang | en
dc.date.accessioned | 2024-07-10T01:02:00Z | -
dc.date.available | 2024-07-10T01:02:00Z | -
dc.date.issued | 2020 | -
dc.identifier.citation | Signal, Image and Video Processing, 14(3), p. 437-444 | en
dc.identifier.issn | 1863-1711 | en
dc.identifier.issn | 1863-1703 | en
dc.identifier.uri | https://hdl.handle.net/1959.11/61409 | -
dc.description.abstract | Recent research has shown that the deep subspace learning (DSL) method can extract high-level features and better represent abstract semantics of data for facial expression recognition. While significant advances have been made in this area, traditional sparse representation classifiers or collaborative representation classifiers are still predominantly used for classification purposes. In this paper, we propose a two-phase representation classifier (TPRC)-driven DSL model for robust facial expression recognition. First, the DSL-based principal component analysis network is used to extract high-level features of training and query samples. Then, the proposed TPRC uses the Euclidean distance as a measure to determine the optimal training sample features (TSFs) for the query sample feature (QSF). Finally, the TPRC represents the QSF as a linear combination of all optimal TSFs and uses the representation result to perform classification. Experiments based on several benchmark datasets confirm that the proposed model exhibits highly competitive performance. | en
dc.language | en | en
dc.publisher | Springer UK | en
dc.relation.ispartof | Signal, Image and Video Processing | en
dc.title | Deep subspace learning for expression recognition driven by a two-phase representation classifier | en
dc.type | Journal Article | en
dc.identifier.doi | 10.1007/s11760-019-01568-4 | en
local.contributor.firstname | Zhe | en
local.contributor.firstname | Raymond | en
local.contributor.firstname | Zhengping | en
local.contributor.firstname | Shufang | en
local.profile.school | School of Science & Technology | en
local.profile.email | rchiong@une.edu.au | en
local.output.category | C1 | en
local.record.place | au | en
local.record.institution | University of New England | en
local.publisher.place | United Kingdom | en
local.format.startpage | 437 | en
local.format.endpage | 444 | en
local.peerreviewed | Yes | en
local.identifier.volume | 14 | en
local.identifier.issue | 3 | en
local.contributor.lastname | Sun | en
local.contributor.lastname | Chiong | en
local.contributor.lastname | Hu | en
local.contributor.lastname | Li | en
dc.identifier.staff | une-id:rchiong | en
local.profile.orcid | 0000-0002-8285-1903 | en
local.profile.role | author | en
local.profile.role | author | en
local.profile.role | author | en
local.profile.role | author | en
local.identifier.unepublicationid | une:1959.11/61409 | en
dc.identifier.academiclevel | Academic | en
dc.identifier.academiclevel | Academic | en
dc.identifier.academiclevel | Academic | en
dc.identifier.academiclevel | Academic | en
local.title.maintitle | Deep subspace learning for expression recognition driven by a two-phase representation classifier | en
local.output.categorydescription | C1 Refereed Article in a Scholarly Journal | en
local.search.author | Sun, Zhe | en
local.search.author | Chiong, Raymond | en
local.search.author | Hu, Zhengping | en
local.search.author | Li, Shufang | en
local.uneassociation | No | en
dc.date.presented | 2020 | -
local.atsiresearch | No | en
local.sensitive.cultural | No | en
local.year.published | 2020 | en
local.year.presented | 2020 | en
local.fileurl.closedpublished | https://rune.une.edu.au/web/retrieve/0a7c3eff-6dda-4562-9bfb-88834d48a834 | en
local.subject.for2020 | 4602 Artificial intelligence | en
local.profile.affiliationtype | External Affiliation | en
local.profile.affiliationtype | External Affiliation | en
local.profile.affiliationtype | External Affiliation | en
local.profile.affiliationtype | External Affiliation | en
local.date.moved | 2024-07-24 | en
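The classifier described in the abstract operates in two phases: a Euclidean-distance screening step that selects the training sample features (TSFs) closest to the query sample feature (QSF), followed by a linear-representation step that reconstructs the QSF from the selected TSFs and classifies by class-wise residual. The sketch below is a minimal NumPy illustration of that two-phase idea, not the authors' implementation; the function name `tprc_classify`, the neighbourhood size `k`, and the ridge regulariser `reg` are assumptions for illustration only, and the deep-subspace (PCA network) feature extraction step is assumed to have already been applied.

```python
import numpy as np

def tprc_classify(train_feats, train_labels, query, k=10, reg=1e-3):
    """Hypothetical sketch of a two-phase representation classifier.

    Phase 1: select the k training sample features (TSFs) closest to the
    query sample feature (QSF) under the Euclidean distance.
    Phase 2: represent the QSF as a linear combination of the selected
    TSFs (ridge-regularised least squares) and assign the class whose
    TSFs give the smallest reconstruction residual.
    """
    # Phase 1: Euclidean-distance screening of training features
    dists = np.linalg.norm(train_feats - query, axis=1)
    idx = np.argsort(dists)[:k]
    X = train_feats[idx].T          # d x k dictionary of selected TSFs
    y = train_labels[idx]

    # Phase 2: solve (X^T X + reg*I) c = X^T q for the coefficients c
    coef = np.linalg.solve(X.T @ X + reg * np.eye(k), X.T @ query)

    # Classify by class-wise reconstruction residual
    best_label, best_res = None, np.inf
    for c in np.unique(y):
        mask = (y == c)
        recon = X[:, mask] @ coef[mask]   # reconstruction from class c only
        res = np.linalg.norm(query - recon)
        if res < best_res:
            best_label, best_res = c, res
    return best_label
```

The regulariser keeps the k-by-k normal-equations matrix invertible even when the selected TSFs are nearly collinear, which is common once only a small neighbourhood of the query is retained.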
Appears in Collections: Journal Article; School of Science and Technology
Items in Research UNE are protected by copyright, with all rights reserved, unless otherwise indicated.