Remarks on Inequalities for Large Deviation Probabilities
DOI: 10.1137/1130013 · zbMath: 0583.60023 · OpenAlex: W2065908952 · MaRDI QID: Q3706238
Authors: Iosif Pinelis, Aleksandr Ivanovich Sakhanenko
Publication date: 1986
Published in: Theory of Probability & Its Applications
Full work available at URL: https://doi.org/10.1137/1130013
MSC classifications: Inequalities; stochastic orderings (60E15) · Probability measures on topological spaces (60B05) · Large deviations (60F10)
Related Items:
Optimal-order bounds on the rate of convergence to normality in the multivariate delta method
LEAST SQUARE REGRESSION WITH COEFFICIENT REGULARIZATION BY GRADIENT DESCENT
Efficient estimation and computation for the generalised additive models with unknown link function
On regularization algorithms in learning theory
Integral operator approach to learning theory with unbounded sampling
Convergence analysis of Tikhonov regularization for non-linear statistical inverse problems
Convex regularization in statistical inverse learning problems
Sample average approximations of strongly convex stochastic programs in Hilbert spaces
Coefficient-based regularized distribution regression
Reliable Error Estimates for Optimal Control of Linear Elliptic PDEs with Random Inputs
Sample Size Estimates for Risk-Neutral Semilinear PDE-Constrained Optimization
On the Bennett-Hoeffding inequality
Concentration inequalities for separately convex functions
A stochastic procedure to solve linear ill-posed problems
Kernel conjugate gradient methods with random projections
CROSS-VALIDATION BASED ADAPTATION FOR REGULARIZATION OPERATORS IN LEARNING THEORY
Estimation of high-dimensional low-rank matrices
A Kernel Multiple Change-point Algorithm via Model Selection
Kernel regression estimation in a Banach space
Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces
Elastic-net regularization in learning theory
A mean-field optimal control formulation of deep learning
Optimal Re-centering Bounds, with Applications to Rosenthal-type Concentration of Measure Inequalities
Concentration of weakly dependent Banach-valued sums and applications to statistical learning methods
Improved concentration bounds for sums of independent sub-exponential random variables
Optimal rates for coefficient-based regularized regression
Robbins-Monro algorithm with \(\psi\)-mixing random errors
Approximation and exponential inequalities for sums of dependent random vectors
On a majorization inequality for sums of independent random vectors