Regularized ranking with convex losses and \(\ell^1\)-penalty
DOI: 10.1155/2013/927827
zbMath: 1470.68092
OpenAlex: W2089995060
Wikidata: Q58917746
Scholia: Q58917746
MaRDI QID: Q2319268
Publication date: 16 August 2019
Published in: Abstract and Applied Analysis
Full work available at URL: https://doi.org/10.1155/2013/927827
Classification (MSC):
- Classification and discrimination; cluster analysis (statistical aspects) (62H30)
- Learning and adaptive systems in artificial intelligence (68T05)
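For orientation, a scheme of the kind named in the title can be sketched as a kernel expansion whose coefficients minimize an empirical pairwise convex loss plus an \(\ell^1\)-penalty on the coefficients. The notation below (sample \(\mathbf{z} = \{(x_i, y_i)\}_{i=1}^m\), kernel \(K\), convex loss \(\phi\), regularization parameter \(\lambda\)) is illustrative and not taken from the paper itself:
\[
f_{\mathbf{z}} = \sum_{i=1}^{m} \alpha_i^{\mathbf{z}} K(x_i, \cdot),
\qquad
\boldsymbol{\alpha}^{\mathbf{z}} \in \arg\min_{\boldsymbol{\alpha} \in \mathbb{R}^m}
\frac{1}{m(m-1)} \sum_{i \neq j}
\phi\bigl(\operatorname{sgn}(y_i - y_j)\,(f_{\boldsymbol{\alpha}}(x_i) - f_{\boldsymbol{\alpha}}(x_j))\bigr)
+ \lambda \sum_{i=1}^{m} |\alpha_i|,
\]
where \(f_{\boldsymbol{\alpha}} = \sum_{i=1}^{m} \alpha_i K(x_i, \cdot)\). The pairwise sum is the empirical ranking risk, and the \(\ell^1\)-term promotes sparsity in the kernel expansion.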
Related Items (2)
- Convergence of online pairwise regression learning with quadratic loss
- Error analysis of kernel regularized pairwise learning with a strongly convex loss
Cites Work
- Learning by nonsymmetric kernels with data dependent spaces and \(\ell^1\)-regularizer
- Limit theorems for \(U\)-processes
- The convergence rate of a regularized ranking algorithm
- Fast rates for support vector machines using Gaussian kernels
- Analysis of support vector machines regression
- \(U\)-processes indexed by Vapnik-Červonenkis classes of functions with applications to asymptotics and bootstrap of \(U\)-statistics with estimated parameters
- Support vector machines regression with \(l^1\)-regularizer
- Learning rates for \(l^1\)-regularized kernel classifiers
- Ranking and empirical minimization of \(U\)-statistics
- Classifiers of support vector machine type with \(\ell_1\) complexity regularization
- Learning theory estimates via integral operators and their approximations
- Learning Theory
- An iterative thresholding algorithm for linear inverse problems with a sparsity constraint
- For most large underdetermined systems of linear equations the minimal \(\ell^1\)-norm solution is also the sparsest solution
- Convexity, Classification, and Risk Bounds
- Theory of Reproducing Kernels