Please use this identifier to cite or link to this item: https://hdl.handle.net/1959.11/7673
Title: Sparse Kernel Learning with LASSO and Its Bayesian Inference Algorithm
Contributor(s): Gao, Junbin (author); Kwan, Paul H (author); Shi, Daming (author)
Publication Date: 2010
DOI: 10.1016/j.neunet.2009.07.001
Handle Link: https://hdl.handle.net/1959.11/7673
Abstract: Kernelized LASSO (Least Absolute Shrinkage and Selection Operator) has been investigated in two recent papers (Gao et al., 2008; Wang et al., 2007). This paper is concerned with learning kernels under the LASSO formulation by adopting a generative Bayesian learning and inference approach. A new robust learning algorithm is proposed which produces a sparse kernel model and is capable of learning the regularization parameters and kernel hyperparameters. The new algorithm is compared with state-of-the-art methods for constructing sparse regression models, such as the relevance vector machine (RVM) and local regularization assisted orthogonal least squares regression (LROLS), and is also shown to possess considerable computational advantages.
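As a rough illustration of the underlying idea only (not the generative Bayesian algorithm proposed in the paper), the sketch below fits a plain kernelized LASSO model in Python: the kernel matrix over the training inputs is treated as a design matrix and an L1 penalty prunes most kernel weights, leaving a sparse kernel model. The RBF kernel width, the regularization strength, and the use of scikit-learn's coordinate-descent Lasso solver are illustrative assumptions; in the paper these quantities are learned through Bayesian inference rather than fixed by hand.

    # Illustrative sketch: plain kernelized LASSO via scikit-learn.
    # NOT the paper's Bayesian inference algorithm; gamma and alpha are
    # fixed here rather than learned.
    import numpy as np
    from sklearn.linear_model import Lasso
    from sklearn.metrics.pairwise import rbf_kernel

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(100, 1))                     # training inputs
    y = np.sinc(X).ravel() + 0.1 * rng.standard_normal(100)   # noisy targets

    K = rbf_kernel(X, X, gamma=0.5)            # kernel matrix used as design matrix
    model = Lasso(alpha=0.01, max_iter=10000)  # L1 penalty drives most weights to zero
    model.fit(K, y)

    support = np.flatnonzero(model.coef_)      # kernel centres that survive the penalty
    print(f"{support.size} of {len(X)} kernel centres retained")

    # Prediction at new points reuses only the retained centres
    X_test = np.linspace(-3, 3, 20).reshape(-1, 1)
    K_test = rbf_kernel(X_test, X[support], gamma=0.5)
    y_pred = K_test @ model.coef_[support] + model.intercept_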
Publication Type: Journal Article
Source of Publication: Neural Networks, 23(2), pp. 257-264
Publisher: Pergamon Press
Place of Publication: United Kingdom
ISSN: 1879-2782; 0893-6080
Fields of Research (FoR) 2008: 080205 Numerical Computation
080201 Analysis of Algorithms and Complexity
080108 Neural, Evolutionary and Fuzzy Computation
Socio-Economic Objective (SEO) 2008: 890202 Application Tools and System Utilities
899999 Information and Communication Services not elsewhere classified
Peer Reviewed: Yes
HERDC Category Description: C1 Refereed Article in a Scholarly Journal
Appears in Collections: Journal Article

Files in This Item: 3 files

SCOPUS™ Citations: 109 (checked on Apr 27, 2024)
Page view(s): 1,126 (checked on Apr 28, 2024)

Items in Research UNE are protected by copyright, with all rights reserved, unless otherwise indicated.