The following pages link to (Q2934047):
Displaying 34 items.
- The restricted strong convexity revisited: analysis of equivalence to error bound and quadratic growth (Q523179)
- Distributed block-diagonal approximation methods for regularized empirical risk minimization (Q782443)
- Incremental learning for \(\nu\)-support vector regression (Q1669094)
- Consistent algorithms for multiclass classification with an abstain option (Q1697489)
- Stochastic block-coordinate gradient projection algorithms for submodular maximization (Q1723100)
- Multi-label Lagrangian support vector machine with random block coordinate descent method (Q1750531)
- New characterizations of Hoffman constants for systems of linear constraints (Q2020601)
- Variational analysis perspective on linear convergence of some first order methods for nonsmooth convex optimization problems (Q2070400)
- Convergence results of a nested decentralized gradient method for non-strongly convex problems (Q2082236)
- Linear convergence of prox-SVRG method for separable non-smooth convex optimization problems under bounded metric subregularity (Q2115253)
- Efficient iterative method for SOAV minimization problem with linear equality and box constraints and its linear convergence (Q2125304)
- Convergence results of a new monotone inertial forward-backward splitting algorithm under the local Hölder error bound condition (Q2128612)
- Parallel random block-coordinate forward-backward algorithm: a unified convergence analysis (Q2133415)
- Randomness and permutations in coordinate descent methods (Q2189444)
- Augmented Lagrangian optimization under fixed-point arithmetic (Q2208540)
- Error bounds for non-polyhedral convex optimization and applications to linear convergence of FDM and PGM (Q2279378)
- New analysis of linear convergence of gradient-type methods via unifying error bound conditions (Q2297652)
- Restarting the accelerated coordinate descent method with a rough strong convexity estimate (Q2301128)
- Linearly convergent away-step conditional gradient for non-strongly convex functions (Q2364483)
- Linear convergence of first order methods for non-strongly convex optimization (Q2414900)
- Projection onto a polyhedron that exploits sparsity (Q2817841)
- Linear convergence of descent methods for the unconstrained minimization of restricted strongly convex functions (Q2821800)
- A Block Successive Upper-Bound Minimization Method of Multipliers for Linearly Constrained Convex Optimization (Q3387904)
- Parallel Random Coordinate Descent Method for Composite Minimization: Convergence Analysis and Error Bounds (Q3465244)
- RSG: Beating Subgradient Method without Smoothness and Strong Convexity (Q4558142)
- (Q4558572)
- On the Complexity Analysis of the Primal Solutions for the Accelerated Randomized Dual Coordinate Ascent (Q4969070)
- Accelerate stochastic subgradient method by leveraging local growth condition (Q5236746)
- Iteration Complexity of a Block Coordinate Gradient Descent Method for Convex Optimization (Q5501228)
- Nonlinear optimization and support vector machines (Q5915690)
- Nonlinear optimization and support vector machines (Q5918754)
- An aggressive reduction on the complexity of optimization for non-strongly convex objectives (Q6052286)
- Methodology and first-order algorithms for solving nonsmooth and non-strongly convex bilevel optimization problems (Q6165595)
- An easily computable upper bound on the Hoffman constant for homogeneous inequality systems (Q6179881)