scientific article; zbMATH DE number 7255124
Publication:4969157
zbMath: 1502.68259 · arXiv: 2003.12210 · MaRDI QID: Q4969157
Authors: Ding-Xuan Zhou, Di Wang, Shao-Bo Lin
Publication date: 5 October 2020
Full work available at URL: https://arxiv.org/abs/2003.12210
Title: Distributed kernel ridge regression with communications
Mathematics Subject Classification:
- Ridge regression; shrinkage estimators (Lasso) (62J07)
- Learning and adaptive systems in artificial intelligence (68T05)
- Distributed algorithms (68W15)
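For orientation, the cited works below include "Divide and Conquer Kernel Ridge Regression: A Distributed Algorithm with Minimax Optimal Rates", the baseline that this line of work on distributed kernel ridge regression builds on. The following is a minimal sketch of that divide-and-conquer baseline, not the indexed paper's exact algorithm; the Gaussian kernel, the regularization value, and all function names are illustrative assumptions.

```python
# Sketch of divide-and-conquer kernel ridge regression (DC-KRR):
# split the sample across machines, solve KRR locally, average the
# local predictors. Parameter choices here are illustrative only.
import numpy as np

def gaussian_kernel(X, Z, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between row sets X and Z."""
    sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq / (2.0 * bandwidth ** 2))

def krr_fit(X, y, lam=1e-2):
    """Solve the local KRR system (K + n*lam*I) alpha = y."""
    n = X.shape[0]
    K = gaussian_kernel(X, X)
    alpha = np.linalg.solve(K + n * lam * np.eye(n), y)
    return X, alpha

def krr_predict(model, X_test):
    """Evaluate a local KRR predictor at the test points."""
    X_train, alpha = model
    return gaussian_kernel(X_test, X_train) @ alpha

def distributed_krr(X, y, n_machines=4, lam=1e-2):
    """Fit KRR on each data partition and return the averaged
    predictor (a single round of communication)."""
    parts = np.array_split(np.arange(X.shape[0]), n_machines)
    models = [krr_fit(X[idx], y[idx], lam) for idx in parts]
    return lambda X_test: np.mean(
        [krr_predict(m, X_test) for m in models], axis=0)

# Illustrative usage on synthetic data
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(400, 1))
y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.standard_normal(400)
predictor = distributed_krr(X, y)
X_test = np.linspace(-1, 1, 5)[:, None]
print(predictor(X_test))
```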
Related Items (5)
- A review of distributed statistical inference
- Rejoinder on ‘A review of distributed statistical inference’
- Distributed smoothed rank regression with heterogeneous errors for massive data
- Estimates on learning rates for multi-penalty distribution regression
- Learning Coefficient Heterogeneity over Networks: A Distributed Spanning-Tree-Based Fused-Lasso Regression
Cites Work
- Kernel ridge vs. principal component regression: minimax bounds and the qualification of regularization operators
- Divide and conquer local average regression
- Distributed regression learning with coefficient regularization
- A distributed one-step estimator
- Distributed kernel-based gradient descent algorithms
- A distribution-free theory of nonparametric regression
- Compactly supported positive definite radial functions
- Optimum bounds for the distributions of martingales in Banach spaces
- Distributed kernel gradient descent algorithm for minimum error entropy principle
- Universality of deep convolutional neural networks
- Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces
- Optimal rates for coefficient-based regularized regression
- Optimal rates for the regularized least-squares algorithm
- On some extensions of Bernstein's inequality for self-adjoint operators
- Distributed learning with multi-penalty regularization
- Learning theory estimates via integral operators and their approximations
- Divide and Conquer Kernel Ridge Regression: A Distributed Algorithm with Minimax Optimal Rates
- Convergence rates of Kernel Conjugate Gradient for random design regression
- Learning Theory
- Kernel techniques: From machine learning to meshless methods
- Deep distributed convolutional neural networks: Universality
- On Nonconvex Decentralized Gradient Descent
- Computational Limits of A Distributed Algorithm For Smoothing Spline
- Distributed learning with indefinite kernels
- Learning theory of distributed spectral algorithms
- An Introduction to Matrix Concentration Inequalities