A Newton-Raphson version of the multivariate Robbins-Monro procedure
From MaRDI portal
Publication:1061434
DOI: 10.1214/aos/1176346589 · zbMath: 0571.62072 · OpenAlex: W2070709745 · MaRDI QID: Q1061434
Publication date: 1985
Published in: The Annals of Statistics
Full work available at URL: https://doi.org/10.1214/aos/1176346589
Keywords: asymptotic normality · asymptotic efficiency · Newton-Raphson · Gauss-Newton algorithm · Robbins-Monro procedure · root finding · a.s. convergence
MSC classifications: Central limit and other weak theorems (60F05) · Strong limit theorems (60F15) · Stochastic approximation (62L20)
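The record itself does not spell out the procedure, so as a hedged illustration only: a Newton-Raphson variant of the Robbins-Monro root-finding scheme replaces the scalar gain by the inverse of a running Jacobian estimate built from noisy finite differences. The linear test problem, step sizes, and difference spans below are assumed for the sketch and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical test problem (assumption, not from the paper):
# find the root of M(x) = A @ x - b, observed only with additive noise.
A = np.array([[2.0, 0.5],
              [0.3, 1.5]])
b = np.array([1.0, 2.0])
root = np.linalg.solve(A, b)          # true root, used only to check the sketch

def noisy_M(x):
    """Noisy observation of M(x); noise level is an illustrative choice."""
    return A @ x - b + 0.1 * rng.standard_normal(2)

def newton_rm(x0, n_iter=20000, c=0.5):
    """Newton-Raphson-style Robbins-Monro iteration (illustrative sketch).

    At each step a one-sided finite-difference Jacobian estimate is formed
    from noisy observations, averaged over all past steps, and its inverse
    scales the usual 1/n Robbins-Monro step.
    """
    x = np.asarray(x0, dtype=float)
    d = len(x)
    J_sum = np.zeros((d, d))          # running sum of Jacobian estimates
    for n in range(1, n_iter + 1):
        cn = c / n ** 0.25            # shrinking finite-difference span
        y0 = noisy_M(x)
        # One-sided finite-difference Jacobian estimate at the current iterate.
        Jn = np.column_stack([(noisy_M(x + cn * e) - y0) / cn
                              for e in np.eye(d)])
        J_sum += Jn
        J_bar = J_sum / n             # averaged Jacobian estimate
        # Newton-type Robbins-Monro step: scale the noisy observation
        # by the inverse of the averaged Jacobian and a 1/n gain.
        x = x - (1.0 / n) * np.linalg.solve(J_bar, y0)
    return x

x_hat = newton_rm([0.0, 0.0])
```

Under these toy assumptions the iterate settles near the true root; the averaged Jacobian is what distinguishes this Newton-type scheme from the plain Robbins-Monro recursion with a scalar gain.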
Related Items
Efficient Robbins–Monro procedure for multivariate binary data ⋮ A gradient method for unconstrained optimization in noisy environment ⋮ Asymptotic properties of dual averaging algorithm for constrained distributed stochastic optimization ⋮ Computing highly accurate confidence limits from discrete data using importance sampling ⋮ Randomized Newton-Raphson ⋮ Newton-based stochastic optimization using \(q\)-Gaussian smoothed functional algorithms ⋮ Multidimensional stochastic approximation ⋮ Adaptive random search for continuous simulation optimization ⋮ Efficient and fast estimation of the geometric median in Hilbert spaces with an averaged stochastic gradient algorithm ⋮ Stochastic approximation algorithms: overview and recent trends. ⋮ Regression estimation by local polynomial fitting for multivariate data streams ⋮ Sequential Estimation in Stochastic Approximation Problem with Autoregressive Errors in Observations ⋮ Stochastic approximation: from statistical origin to big-data, multidisciplinary applications ⋮ Non-asymptotic confidence bounds for stochastic approximation algorithms with constant step size ⋮ Fundamental design principles for reinforcement learning algorithms ⋮ Smoothed Functional Algorithms for Stochastic Optimization Using q -Gaussian Distributions ⋮ LSOS: Line-search second-order stochastic optimization methods for nonconvex finite sums