Optimal rates for regularization of statistical inverse learning problems

From MaRDI portal
Publication:667648

DOI: 10.1007/s10208-017-9359-7
zbMath: 1412.62042
arXiv: 1604.04054
OpenAlex: W2963053844
MaRDI QID: Q667648

Nicole Mücke, Gilles Blanchard

Publication date: 1 March 2019

Published in: Foundations of Computational Mathematics

Full work available at URL: https://arxiv.org/abs/1604.04054




Related Items (37)

Construction and Monte Carlo estimation of wavelet frames generated by a reproducing kernel
Two-Layer Neural Networks with Values in a Banach Space
Distributed spectral pairwise ranking algorithms
Lower bounds for invariant statistical models with applications to principal component analysis
Shearlet-based regularization in statistical inverse learning with an application to x-ray tomography
Unnamed Item
Unnamed Item
Rates of convergence of randomized Kaczmarz algorithms in Hilbert spaces
Convergence of regularization methods with filter functions for a regularization parameter chosen with GSURE and mildly ill-posed inverse problems
A note on the prediction error of principal component regression in high dimensions
Mini-workshop: Mathematical foundations of robust and generalizable learning. Abstracts from the mini-workshop held October 2--8, 2022
Learning rates for the kernel regularized regression with a differentiable strongly convex loss
Tikhonov regularization with oversmoothing penalty for nonlinear statistical inverse problems
Convergence analysis of Tikhonov regularization for non-linear statistical inverse problems
Convex regularization in statistical inverse learning problems
Unnamed Item
Optimality of regularized least squares ranking with imperfect kernels
Inverse learning in Hilbert scales
Sketching with Spherical Designs for Noisy Data Fitting on Spheres
Nonlinear Tikhonov regularization in Hilbert scales for inverse learning
Kernel regression, minimax rates and effective dimensionality: Beyond the regular case
On the Improved Rates of Convergence for Matérn-Type Kernel Ridge Regression with Application to Calibration of Computer Models
Kernel conjugate gradient methods with random projections
Nyström subsampling method for coefficient-based regularized regression
Online regularized pairwise learning with least squares loss
Convergence analysis of distributed multi-penalty regularized pairwise learning
Convergence Rates of Spectral Regularization Methods: A Comparison between Ill-Posed Inverse Problems and Statistical Kernel Learning
The empirical process of residuals from an inverse regression
Concentration of weakly dependent Banach-valued sums and applications to statistical learning methods
An elementary analysis of ridge regression with random design
Unnamed Item
Unnamed Item
Unnamed Item
Bayesian frequentist bounds for machine learning and system identification
Error analysis of the kernel regularized regression based on refined convex losses and RKBSs
From inexact optimization to learning via gradient concentration
Regularization: From Inverse Problems to Large-Scale Machine Learning


