Convergence analysis of stochastic higher-order majorization–minimization algorithms
Publication: 6586916
DOI: 10.1080/10556788.2023.2256447
MaRDI QID: Q6586916
Publication date: 13 August 2024
Published in: Optimization Methods & Software
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Gradient methods for minimizing composite functions
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
- Convergence of stochastic proximal gradient algorithm
- General convergence analysis of stochastic first-order methods for composite optimization
- Local convergence of tensor methods
- Smoothness parameter of power of Euclidean norm
- Implementable tensor methods in unconstrained convex optimization
- On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming
- Cubic regularization of Newton method and its global performance
- Robust Stochastic Approximation Approach to Stochastic Programming
- Optimization Methods for Large-Scale Machine Learning
- Inexact basic tensor methods for some classes of convex optimization problems
- A concise second-order complexity analysis for unconstrained optimization using high-order regularized models
- Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
- The Łojasiewicz Inequality for Nonsmooth Subanalytic Functions with Applications to Subgradient Dynamical Systems
- IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate
- Exact and inexact subsampled Newton methods for optimization