On Convergence Rates of Linearized Proximal Algorithms for Convex Composite Optimization with Applications
DOI: 10.1137/140993090
zbMath: 1338.65159
OpenAlex: W2398068802
MaRDI QID: Q2810547
Authors: Chong Li, Yao-Hua Hu, Xiao Qi Yang
Publication date: 3 June 2016
Published in: SIAM Journal on Optimization
Full work available at URL: https://semanticscholar.org/paper/f93105648b3ea56d337c196c94b3862c956bea27
Keywords: weak sharp minima; feasibility problem; convex composite optimization; sensor network localization; linearized proximal algorithm; quasi-regularity condition
MSC classification: Numerical mathematical programming methods (65K05); Nonconvex programming, global optimization (90C26)
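For orientation, the keywords above can be glossed with the standard convex composite model from the literature this paper sits in. The following is a minimal sketch in our own notation (the symbols h, c, F, and the stepsize v_k are assumptions of this sketch, not quoted from the paper): the objective composes a convex function with a smooth map, and a linearized proximal (prox-linear) iteration linearizes the inner map at the current iterate and adds a proximal term.

\[
\min_{x \in \mathbb{R}^n} \; F(x) := h\bigl(c(x)\bigr),
\qquad h \colon \mathbb{R}^m \to \mathbb{R} \ \text{convex},
\quad c \colon \mathbb{R}^n \to \mathbb{R}^m \ \text{smooth},
\]
\[
x_{k+1} \in \operatorname*{argmin}_{x \in \mathbb{R}^n}
\Bigl\{ h\bigl(c(x_k) + c'(x_k)(x - x_k)\bigr)
+ \frac{1}{2 v_k}\,\lVert x - x_k \rVert^2 \Bigr\},
\qquad v_k > 0.
\]

Weak sharp minima and the quasi-regularity condition named in the keywords are the kind of assumptions under which local convergence rates for such iterations are typically derived; convex feasibility problems and sensor network localization fit this model as special cases.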
Related Items
- Stochastic quasi-subgradient method for stochastic quasi-convex feasibility problems
- Linear regularity and linear convergence of projection-based methods for solving convex feasibility problems
- Convergence analysis of new inertial method for the split common null point problem
- Interior quasi-subgradient method with non-Euclidean distances for constrained quasi-convex optimization problems in Hilbert spaces
- Iterative positive thresholding algorithm for non-negative sparse optimization
- Two iterative processes generated by regular vector fields in Banach spaces
- Weak Sharp Minima for Convex Infinite Optimization Problems in Normed Linear Spaces
- Multiple-sets split quasi-convex feasibility problems: Adaptive subgradient methods with convergence guarantee
- A dynamical system method for solving the split convex feasibility problem
- On the superlinear convergence of Newton's method on Riemannian manifolds
- The equivalence of three types of error bounds for weakly and approximately convex functions
- Damped Newton's method on Riemannian manifolds
- Riemannian linearized proximal algorithms for nonnegative inverse eigenvalue problem
- A successive centralized circumcentered-reflection method for the convex feasibility problem
- A modified inexact Levenberg-Marquardt method with the descent property for solving nonlinear equations
- Convergence rate of the relaxed CQ algorithm under Hölderian type error bound property
- Optimality conditions for composite DC infinite programming problems
- Linearized proximal algorithms with adaptive stepsizes for convex composite optimization with applications
- Descent methods with computational errors in Banach spaces
- Modified inexact Levenberg-Marquardt methods for solving nonlinear least squares problems
- Incremental quasi-subgradient methods for minimizing the sum of quasi-convex functions
- Linear convergence of inexact descent method and inexact proximal gradient algorithms for lower-order regularization problems
- A family of projection gradient methods for solving the multiple-sets split feasibility problem
- Quantitative Analysis for Perturbed Abstract Inequality Systems in Banach Spaces
- An adaptive fixed-point proximity algorithm for solving total variation denoising models
- Strong Metric (Sub)regularity of Karush–Kuhn–Tucker Mappings for Piecewise Linear-Quadratic Convex-Composite Optimization and the Quadratic Convergence of Newton’s Method
- Abstract convergence theorem for quasi-convex optimization problems with applications
- Quasi-convex feasibility problems: subgradient methods and convergence rates
- Convergence rates of subgradient methods for quasi-convex optimization problems
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- A proximal method for composite minimization
- Composite proximal bundle method
- Gradient methods for minimizing composite functions
- Strong KKT conditions and weak sharp solutions in convex-composite optimization
- Convergence analysis of the Gauss-Newton method for convex inclusion and convex-composite optimization problems
- Tame functions are semismooth
- Error bounds in mathematical programming
- A Gauss-Newton method for convex composite optimization
- A nonsmooth version of Newton's method
- Optimization theory and methods. Nonlinear programming
- Inexact subgradient methods for quasi-convex optimization problems
- A Proximal-Gradient Homotopy Method for the Sparse Least-Squares Problem
- On the $O(1/n)$ Convergence Rate of the Douglas–Rachford Alternating Direction Method
- Weak Sharp Minima in Mathematical Programming
- Weak Sharp Minima for Semi-infinite Optimization Problems with Applications
- Majorizing Functions and Convergence of the Gauss–Newton Method for Convex Composite Optimization
- Further Relaxations of the Semidefinite Programming Approach to Sensor Network Localization
- Local properties of algorithms for minimizing nonsmooth composite functions
- Conditions for convergence of trust region algorithms for nonsmooth optimization
- Descent methods for composite nondifferentiable optimization problems
- Second order necessary and sufficient conditions for convex composite NDO
- First- and Second-Order Epi-Differentiability in Nonlinear Programming
- Stability Theory for Systems of Inequalities, Part II: Differentiable Nonlinear Systems
- Monotone Operators and the Proximal Point Algorithm
- Normalized Incremental Subgradient Algorithm and Its Application
- On Projection Algorithms for Solving Convex Feasibility Problems
- Second-order Sufficiency and Quadratic Growth for Nonisolated Minima
- Weak Sharp Minima: Characterizations and Sufficient Conditions
- Alternating Projections on Manifolds
- On convergence of the Gauss-Newton method for convex composite optimization.