scientific article; zbMATH DE number 7255051
From MaRDI portal
Publication: 4969055
zbMath: 1499.62138; MaRDI QID: Q4969055
Publication date: 5 October 2020
Full work available at URL: https://jmlr.csail.mit.edu/papers/v21/19-083.html
Title: unavailable (zbMATH Open Web Interface contents unavailable due to conflicting licenses)
Nonparametric regression and quantile regression (62G08)
Asymptotic properties of nonparametric inference (62G20)
Hilbert spaces with reproducing kernels (= (proper) functional Hilbert spaces, including de Branges-Rovnyak and other structured spaces) (46E22)
Related Items (3)
- Decentralized learning over a network with Nyström approximation using SGD
- Kernel conjugate gradient methods with random projections
- Unnamed Item
Cites Work
- Random design analysis of ridge regression
- Functional data analysis of juggling trajectories: rejoinder
- Kernel ridge vs. principal component regression: minimax bounds and the qualification of regularization operators
- Optimal rates for regularization of statistical inverse learning problems
- User-friendly tail bounds for sums of random matrices
- On regularization algorithms in learning theory
- A simple proof of the restricted isometry property for random matrices
- Uniform uncertainty principle for Bernoulli and subgaussian ensembles
- Randomized sketches for kernels: fast and optimal nonparametric regression
- Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces
- Optimal rates for the regularized least-squares algorithm
- On some extensions of Bernstein's inequality for self-adjoint operators
- Learning theory estimates via integral operators and their approximations
- New and Improved Johnson–Lindenstrauss Embeddings via the Restricted Isometry Property
- Randomized Algorithms for Matrices and Data
- Learning Theory
- Support Vector Machines
- Spectral Algorithms for Supervised Learning
- Cross-validation based adaptation for regularization operators in learning theory
- A new concentration result for regularized risk minimizers
- Remarks on Inequalities for Large Deviation Probabilities
- Norm Inequalities Equivalent to Heinz Inequality
- Optimal Rates for Multi-pass Stochastic Gradient Methods
- On the Equivalence between Kernel Quadrature Rules and Random Feature Expansions
- Learning Bounds for Kernel Regression Using Effective Data Dimensionality