Scaling Techniques for $\epsilon$-Subgradient Methods
DOI: 10.1137/14097642X
zbMath: 1347.65106
arXiv: 1407.6133
OpenAlex: W2510723730
MaRDI QID: Q2817840
Valeria Ruggiero, Silvia Bonettini, Alessandro Benfenati
Publication date: 2 September 2016
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1407.6133
Keywords: convergence; image restoration; variable metric; nonsmooth convex problems; forward-backward \(\epsilon\)-subgradient method; scaled primal-dual hybrid gradient algorithm; step size selection rules; TV restoration
MSC classification: Numerical mathematical programming methods (65K05); Convex programming (90C25); Numerical aspects of computer graphics, image analysis, and computational geometry (65D18); Image processing (compression, reconstruction, etc.) in information and communication theory (94A08)
Cites Work
- Nonlinear total variation based noise removal algorithms
- A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms
- Variable metric forward-backward algorithm for minimizing the sum of a differentiable function and a convex function
- Variable metric quasi-Fejér monotonicity
- A limited memory steepest descent method
- Linear convergence of iterative soft-thresholding
- An inertial forward-backward algorithm for monotone inclusions
- New adaptive stepsize selections in gradient methods
- An affine-scaling interior-point CBB method for box-constrained optimization
- Projected subgradient methods with non-Euclidean distances for non-differentiable convex minimization and variational inequalities
- Error stability properties of generalized gradient-type algorithms
- On the projected subgradient method for nonsmooth convex optimization in a Hilbert space
- Convergence of a simple subgradient level method
- Convergence of some algorithms for convex minimization
- On the convergence of conditional \(\varepsilon\)-subgradient methods for convex programs and convex-concave saddle-point problems
- Introductory lectures on convex optimization. A basic course.
- Linear convergence of epsilon-subgradient descent methods for a class of convex functions
- A descent proximal level bundle method for convex nondifferentiable optimization
- On the convergence of primal-dual hybrid gradient algorithms for total variation image restoration
- Bregman operator splitting with variable stepsize for total variation image reconstruction
- On proximal subgradient splitting method for minimizing the sum of two nonsmooth convex functions
- Non-negatively constrained image deblurring with an inexact interior point method
- Total variation-penalized Poisson likelihood estimation for ill-posed problems
- Incremental Subgradient Methods for Nondifferentiable Optimization
- Variable Metric Inexact Line-Search-Based Methods for Nonsmooth Optimization
- An $\mathcal{O}(1/k)$ Convergence Rate for the Variable Stepsize Bregman Operator Splitting Algorithm
- On the convergence of the forward–backward splitting method with linesearches
- A convergent blind deconvolution method for post-adaptive-optics astronomical imaging
- On spectral properties of steepest descent methods
- Accelerated and Inexact Forward-Backward Algorithms
- Scaling techniques for gradient projection-type methods in astronomical image deblurring
- A General Framework for a Class of First Order Primal-Dual Algorithms for Convex Optimization in Imaging Science
- Online Learning and Online Convex Optimization
- An alternating extragradient method for total variation-based image restoration from Poisson data
- A scaled gradient projection method for constrained image deblurring
- Efficient gradient projection methods for edge-preserving removal of Poisson noise
- Image deblurring with Poisson data: from cells to galaxies
- Stochastic quasigradient methods and their application to system optimization
- Two-Point Step Size Gradient Methods
- On convergence rates of subgradient optimization methods
- Convergence of Approximate and Incremental Subgradient Methods for Convex Optimization
- Penalized maximum likelihood image restoration with positivity constraints: multiplicative algorithms
- The Efficiency of Subgradient Projection Methods for Convex Optimization, Part I: General Level Methods
- The Efficiency of Subgradient Projection Methods for Convex Optimization, Part II: Implementations and Extensions
- Nonnegative image reconstruction from sparse Fourier data: a new deconvolution algorithm
- On the Convergence of Primal-Dual Hybrid Gradient Algorithm
- Convergence Analysis of Deflected Conditional Approximate Subgradient Methods
- A Scaled Gradient Projection Method for Bayesian Learning in Dynamical Systems
- Restoration of Poissonian Images Using Alternating Direction Optimization
- Covariance-Preconditioned Iterative Methods for Nonnegatively Constrained Astronomical Imaging
- The cyclic Barzilai–Borwein method for unconstrained optimization
- Variable metric forward–backward splitting with applications to monotone inclusions in duality
- Convex Analysis
- Convex analysis and monotone operator theory in Hilbert spaces
- A general method to devise maximum-likelihood signal restoration multiplicative algorithms with non-negativity constraints