Please use this identifier to cite or link to this item: https://hdl.handle.net/1959.11/61445
Title: An adaptive weighted fusion model with two subspaces for facial expression recognition
Contributor(s): Sun, Zhe (author); Hu, Zheng-ping (author); Chiong, Raymond (author); Wang, Meng (author); Zhao, Shuhuan (author)
Publication Date: 2018
DOI: 10.1007/s11760-017-1226-0
Handle Link: https://hdl.handle.net/1959.11/61445
Abstract: 

Automatic facial expression recognition has received considerable attention in the research areas of computer vision and pattern recognition. To achieve satisfactory accuracy, deriving a robust facial expression representation is especially important. In this paper, we present an adaptive weighted fusion model (AWFM), which aims to determine optimal fusion weights automatically. The AWFM integrates two subspaces, i.e., an unsupervised and a supervised subspace, to represent and classify query samples. The unsupervised subspace is formed by differentiated expression samples generated via an auxiliary neutral training set. The supervised subspace is obtained by reconstructing intra-class samples through singular value decomposition (SVD)-based low-rank decomposition of the raw training data. Our experiments on three public facial expression datasets confirm that the proposed model achieves better performance than conventional fusion methods as well as state-of-the-art methods from the literature.
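
For readers unfamiliar with residual-based subspace fusion, the minimal sketch below illustrates the general idea of fusing class-wise reconstruction residuals from two subspaces with an adaptively chosen weight. It is an assumption-laden illustration, not the paper's AWFM algorithm: the function names, the least-squares reconstruction, and the margin-based weight-selection rule are all hypothetical.

    # Hypothetical sketch: weighted fusion of residuals from two subspaces.
    # This is NOT the AWFM formulation from the paper; shapes, names, and the
    # weight-selection rule are illustrative assumptions.
    import numpy as np

    def class_residuals(query, class_dicts):
        """Least-squares reconstruction residual of `query` per class.

        query       : 1-D array of shape (n_features,)
        class_dicts : list of arrays, each (n_features, n_samples_in_class)
        """
        residuals = []
        for D in class_dicts:
            coef, *_ = np.linalg.lstsq(D, query, rcond=None)
            residuals.append(np.linalg.norm(query - D @ coef))
        return np.asarray(residuals)

    def fused_prediction(query, unsup_dicts, sup_dicts,
                         weights=np.linspace(0.0, 1.0, 11)):
        """Fuse the two residual vectors and return the predicted class index.

        The weight is chosen adaptively here by maximising the margin between
        the best and second-best class residual (one simple possible rule).
        """
        r_u = class_residuals(query, unsup_dicts)  # unsupervised subspace
        r_s = class_residuals(query, sup_dicts)    # supervised subspace
        best = None
        for w in weights:
            r = w * r_u + (1.0 - w) * r_s
            top2 = np.partition(r, 1)[:2]          # two smallest residuals
            margin = top2[1] - top2[0]
            if best is None or margin > best[0]:
                best = (margin, int(np.argmin(r)))
        return best[1]

In this sketch, each subspace contributes one residual per class, and the query is assigned to the class with the smallest fused residual; the paper's actual model builds the two subspaces from neutral-face differencing and SVD-based low-rank reconstruction, as described in the abstract above.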

Publication Type: Journal Article
Source of Publication: Signal, Image and Video Processing, v.12, pp. 835-843
Publisher: Springer UK
Place of Publication: United Kingdom
ISSN: 1863-1711; 1863-1703
Fields of Research (FoR) 2020: 4602 Artificial intelligence
Peer Reviewed: Yes
HERDC Category Description: C1 Refereed Article in a Scholarly Journal
Appears in Collections: Journal Article; School of Science and Technology

Files in This Item: 1 file

Scopus™ Citations: 7 (checked on Jan 18, 2025)

Items in Research UNE are protected by copyright, with all rights reserved, unless otherwise indicated.