An approximation theory approach to learning with \(\ell^1\) regularization
From MaRDI portal
Publication:1944318
DOI: 10.1016/j.jat.2012.12.004
zbMath: 1283.68308
OpenAlex: W1966805857
MaRDI QID: Q1944318
Hong-Yan Wang, Quan-Wu Xiao, Ding-Xuan Zhou
Publication date: 5 April 2013
Published in: Journal of Approximation Theory
Full work available at URL: https://doi.org/10.1016/j.jat.2012.12.004
Keywords: learning theory; multivariate approximation; \(\ell^1\)-regularizer; data dependent hypothesis spaces; kernel-based regularization scheme
Computational learning theory (68Q32); Learning and adaptive systems in artificial intelligence (68T05); Multidimensional problems (41A63)
Related Items
Learning by atomic norm regularization with polynomial kernels, Sharp learning rates of coefficient-based \(l^q\)-regularized regression with indefinite kernels, Learning theory approach to a system identification problem involving atomic norm, Learning with Convex Loss and Indefinite Kernels, A simpler approach to coefficient regularized support vector machines regression, On empirical eigenfunction-based ranking with \(\ell^1\) norm regularization, Stability analysis of learning algorithms for ontology similarity computation, Multikernel Regression with Sparsity Constraint