An efficient primal dual prox method for non-smooth optimization
Publication: 2339936
DOI: 10.1007/s10994-014-5436-1
zbMath: 1311.90188
arXiv: 1201.5283
OpenAlex: W2034410976
MaRDI QID: Q2339936
Shenghuo Zhu, Rong Jin, Mehrdad Mahdavi, Tianbao Yang
Publication date: 14 April 2015
Published in: Machine Learning
Full work available at URL: https://arxiv.org/abs/1201.5283
Mathematics Subject Classification (MSC):
- Derivative-free methods and methods using generalized derivatives (90C56)
- Learning and adaptive systems in artificial intelligence (68T05)
- Nonsmooth analysis (49J52)
Related Items (8)
- Approximately nearest neighborhood image search using unsupervised hashing via homogeneous kernels
- Efficient computation of the nearest polynomial by linearized alternating direction method
- An asynchronous subgradient-proximal method for solving additive convex optimization problems
- The nearest polynomial to multiple given polynomials with a given zero: a unified optimization approach
- RSG: Beating Subgradient Method without Smoothness and Strong Convexity
- Point process estimation with Mirror Prox algorithms
- Accelerate stochastic subgradient method by leveraging local growth condition
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Smooth minimization of non-smooth functions
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- An optimal method for stochastic composite optimization
- Primal-dual splitting algorithm for solving inclusions with mixtures of composite, Lipschitzian, and parallel-sum type monotone operators
- Primal-dual first-order methods with O(1/ε) iteration-complexity for cone programming
- Pegasos: primal estimated sub-gradient solver for SVM
- Convex multi-task feature learning
- A feature selection Newton method for support vector machine classification
- A first-order primal-dual algorithm for convex problems with applications to imaging
- Exact matrix completion via convex optimization
- Optimization with Sparsity-Inducing Penalties
- Convergence Analysis of Primal-Dual Algorithms for a Saddle-Point Problem: From Contraction Perspective
- A General Framework for a Class of First Order Primal-Dual Algorithms for Convex Optimization in Imaging Science
- Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization
- A forward–backward splitting algorithm for the minimization of non-smooth convex functionals in Banach space
- Splitting Algorithms for the Sum of Two Nonlinear Operators
- Monotone Operators and the Proximal Point Algorithm
- ESTIMATING THE APPROXIMATION ERROR IN LEARNING THEORY
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming
- Are Loss Functions All the Same?
- Excessive Gap Technique in Nonsmooth Convex Minimization
- Model Selection and Estimation in Regression with Grouped Variables
- The elements of statistical learning. Data mining, inference, and prediction