Sample size selection in optimization methods for machine learning

Publication: 715253

DOI: 10.1007/s10107-012-0572-5
zbMath: 1252.49044
OpenAlex: W2061570747
Wikidata: Q105583393
Scholia: Q105583393
MaRDI QID: Q715253

Richard H. Byrd, Gillian M. Chin, Jorge Nocedal, Yuchen Wu

Publication date: 2 November 2012

Published in: Mathematical Programming. Series A. Series B

Full work available at URL: https://doi.org/10.1007/s10107-012-0572-5



Related Items

Accelerating mini-batch SARAH by step size rules
A fully stochastic second-order trust region method
An inexact successive quadratic approximation method for L-1 regularized optimization
Algorithms for Kullback--Leibler Approximation of Probability Measures in Infinite Dimensions
A theoretical and empirical comparison of gradient approximations in derivative-free optimization
A family of second-order methods for convex \(\ell _1\)-regularized optimization
Descent direction method with line search for unconstrained optimization in noisy environment
Global convergence rate analysis of unconstrained optimization methods based on probabilistic models
Ritz-like values in steplength selections for stochastic gradient methods
Adaptive Sampling Strategies for Stochastic Optimization
A Variable Sample-Size Stochastic Quasi-Newton Method for Smooth and Nonsmooth Stochastic Convex Optimization
Stochastic Trust-Region Methods with Trust-Region Radius Depending on Probabilistic Models
A nonmonotone line search method for stochastic optimization problems
Subsampled nonmonotone spectral gradient methods
Probability maximization via Minkowski functionals: convex representations and tractable resolution
An adaptive stochastic sequential quadratic programming with differentiable exact augmented Lagrangians
Inequality constrained stochastic nonlinear optimization via active-set sequential quadratic programming
A trust region method for noisy unconstrained optimization
An adaptive sampling augmented Lagrangian method for stochastic optimization with deterministic constraints
Statistically equivalent surrogate material models: impact of random imperfections on the elasto-plastic response
Risk-averse design of tall buildings for uncertain wind conditions
Adaptive stochastic approximation algorithm
Gradient-based optimisation of the conditional-value-at-risk using the multi-level Monte Carlo method
An overview of stochastic quasi-Newton methods for large-scale machine learning
A framework of convergence analysis of mini-batch stochastic projected gradient methods
Inexact restoration with subsampled trust-region methods for finite-sum minimization
Adaptive sampling stochastic multigradient algorithm for stochastic multiobjective optimization
A line search based proximal stochastic gradient algorithm with dynamical variance reduction
Hessian averaging in stochastic Newton methods achieves superlinear convergence
On Sampling Rates in Simulation-Based Recursions
Adaptive sampling quasi-Newton methods for zeroth-order stochastic optimization
Trust-region algorithms for training responses: machine learning methods using indefinite Hessian approximations
Stable architectures for deep neural networks
Batched Stochastic Gradient Descent with Weighted Sampling
A subspace-accelerated split Bregman method for sparse data recovery with joint \(\ell_1\)-type regularizers
Variance-Based Extragradient Methods with Line Search for Stochastic Variational Inequalities
Gradient-Based Adaptive Stochastic Search for Simulation Optimization Over Continuous Space
Randomized Approach to Nonlinear Inversion Combining Random and Optimized Simultaneous Sources and Detectors
Deep Learning for Trivial Inverse Problems
Asynchronous Schemes for Stochastic and Misspecified Potential Games and Nonconvex Optimization
Parallel Optimization Techniques for Machine Learning
Estimating the algorithmic variance of randomized ensembles via the bootstrap
Convergence of Newton-MR under Inexact Hessian Information
On variance reduction for stochastic smooth convex optimization with multiplicative noise
Sub-sampled Newton methods
Spectral projected gradient method for stochastic optimization
Optimization Methods for Large-Scale Machine Learning
A count sketch maximal weighted residual Kaczmarz method for solving highly overdetermined linear systems
A deep learning semiparametric regression for adjusting complex confounding structures
Extragradient Method with Variance Reduction for Stochastic Variational Inequalities
Second-order orthant-based methods with enriched Hessian information for sparse \(\ell _1\)-optimization
Resolving learning rates adaptively by locating stochastic non-negative associated gradient projection points using line searches
A Stochastic Quasi-Newton Method for Large-Scale Optimization
Accelerating deep neural network training with inconsistent stochastic gradient descent
Convergence of the reweighted \(\ell_1\) minimization algorithm for \(\ell_2-\ell_p\) minimization
A Stochastic Line Search Method with Expected Complexity Analysis
A second-order method for convex \(\ell_1\)-regularized optimization with active-set prediction
A robust multi-batch L-BFGS method for machine learning
Solving inverse problems using data-driven models
An Inexact Variable Metric Proximal Point Algorithm for Generic Quasi-Newton Acceleration
Global Convergence Rate Analysis of a Generic Line Search Algorithm with Noise
Adaptive Deep Learning for High-Dimensional Hamilton--Jacobi--Bellman Equations
A Stochastic Semismooth Newton Method for Nonsmooth Nonconvex Optimization
On the local convergence of a stochastic semismooth Newton method for nonsmooth nonconvex optimization
Linesearch Newton-CG methods for convex optimization with noise
Nonmonotone line search methods with variable sample size
Restricted strong convexity and its applications to convergence analysis of gradient-type methods in convex optimization
LSOS: Line-search second-order stochastic optimization methods for nonconvex finite sums
Newton-like Method with Diagonal Correction for Distributed Optimization


Uses Software


Cites Work