A dual Bregman proximal gradient method for relatively-strongly convex optimization
From MaRDI portal
Publication: Q2092292
DOI: 10.3934/naco.2021028
zbMath: 1497.90150
OpenAlex: W3184536587
MaRDI QID: Q2092292
Publication date: 2 November 2022
Published in: Numerical Algebra, Control and Optimization
Full work available at URL: https://doi.org/10.3934/naco.2021028
- Numerical mathematical programming methods (65K05)
- Convex programming (90C25)
- Optimality conditions and duality in mathematical programming (90C46)
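For context, the Bregman proximal gradient framework underlying methods of this type (developed in the cited works on relatively smooth convex optimization, e.g. Lu–Freund–Nesterov and Bauschke–Bolte–Teboulle) iterates the following step. This is a generic sketch of the standard primal update with an assumed step size \(\lambda_k\), not the paper's specific dual scheme:

```latex
% Generic Bregman proximal gradient step (sketch; notation assumed):
% minimize f(x) + g(x), where f is smooth relative to a reference
% convex function h, and D_h denotes the Bregman distance of h.
\[
  D_h(x, y) = h(x) - h(y) - \langle \nabla h(y),\, x - y \rangle,
\]
\[
  x_{k+1} \in \operatorname*{argmin}_{x}\;
  \Big\{ g(x) + \langle \nabla f(x_k),\, x - x_k \rangle
       + \tfrac{1}{\lambda_k}\, D_h(x, x_k) \Big\}.
\]
```

With \(h(x) = \tfrac{1}{2}\lVert x \rVert^2\), \(D_h\) reduces to the squared Euclidean distance and the step recovers the classical proximal gradient method.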
Cites Work
- Unnamed Item
- Unnamed Item
- Unnamed Item
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- A fast dual proximal-gradient method for separable convex optimization with linear coupled constraints
- On the weak convergence of an ergodic iteration for the solution of variational inequalities for monotone operators in Hilbert space
- A simplified view of first order methods for optimization
- A fast dual proximal gradient algorithm for convex minimization and applications
- Accelerated Bregman proximal gradient methods for relatively smooth convex optimization
- Convergence Analysis of a Proximal-Like Minimization Algorithm Using Bregman Functions
- Optimum Designs in Regression Problems
- Image deblurring with Poisson data: from cells to galaxies
- Splitting Algorithms for the Sum of Two Nonlinear Operators
- A generalized proximal point algorithm for certain non-convex minimization problems
- First-Order Methods in Optimization
- Relatively Smooth Convex Optimization by First-Order Methods, and Applications
- A Descent Lemma Beyond Lipschitz Gradient Continuity: First-Order Methods Revisited and Applications