Incremental Subgradient Methods for Nondifferentiable Optimization

From MaRDI portal
Publication:2784405

DOI: 10.1137/S1052623499362111
zbMath: 0991.90099
MaRDI QID: Q2784405

Dimitri P. Bertsekas, Angelia Nedić

Publication date: 23 April 2002

Published in: SIAM Journal on Optimization
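For orientation, the methods this paper analyzes minimize a sum f(x) = f_1(x) + … + f_m(x) of convex, possibly nondifferentiable components by stepping along a subgradient of a single component at a time, cycling through the components. The following one-dimensional sketch is illustrative only (it is not code from the paper; the constant stepsize, the cycle count, and the test problem f_i(x) = |x - a_i| are choices made for this example):

```python
def incremental_subgradient(a, x0=0.0, alpha=0.01, cycles=2000):
    """Minimize f(x) = sum_i |x - a_i| incrementally: each inner step
    moves along a subgradient of one component f_i(x) = |x - a_i|,
    cycling through the components with a small constant stepsize."""
    x = x0
    for _ in range(cycles):
        for ai in a:
            # A subgradient of |x - ai| is sign(x - ai); 0 is a valid
            # choice at the kink x == ai.
            g = 1.0 if x > ai else (-1.0 if x < ai else 0.0)
            x -= alpha * g
    return x

# f is minimized at the median of a (here 3.0); with a constant stepsize
# the iterates settle into a band of width on the order of len(a) * alpha
# around the minimizer rather than converging exactly.
x_star = incremental_subgradient([1.0, 2.0, 3.0, 4.0, 5.0])
```

With a diminishing stepsize instead of a constant one, the iterates converge to the minimizer; the tradeoffs between these stepsize rules are a central topic of the paper.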




Related Items

Strong consistency of random gradient-free algorithms for distributed optimization
An estimation approach for the influential-imitator diffusion
Deep Learning for Marginal Bayesian Posterior Inference with Recurrent Neural Networks
Distributed adaptive clustering learning over time-varying multitask networks
Composite optimization with coupling constraints via dual proximal gradient method with applications to asynchronous networks
A distributed proximal gradient method with time-varying delays for solving additive convex optimizations
Proximal variable smoothing method for three-composite nonconvex nonsmooth minimization with a linear operator
Incremental subgradient algorithms with dynamic step sizes for separable convex optimizations
Two stochastic optimization algorithms for convex optimization with fixed point constraints
Gradient projection methods for the $n$-coupling problem
Generalised gossip-based subgradient method for distributed optimisation
Performance of Some Approximate Subgradient Methods over Nonlinearly Constrained Networks
A merit function approach to the subgradient method with averaging
Essentials of numerical nonsmooth optimization
Imaging with highly incomplete and corrupted data
Path-based incremental target level algorithm on Riemannian manifolds
Projection algorithms with dynamic stepsize for constrained composite minimization
Abstract convergence theorem for quasi-convex optimization problems with applications
Convergence Rate of Incremental Gradient and Incremental Newton Methods
Event-triggered zero-gradient-sum distributed convex optimisation over networks with time-varying topologies
Incremental Quasi-Subgradient Method for Minimizing Sum of Geodesic Quasi-Convex Functions on Riemannian Manifolds with Applications
On proximal subgradient splitting method for minimizing the sum of two nonsmooth convex functions
Communication-reducing algorithm of distributed least mean square algorithm with neighbor-partial diffusion
Constrained incremental bundle method with partial inexact oracle for nonsmooth convex semi-infinite programming problems
Subgradient method with feasible inexact projections for constrained convex optimization problems
A novel Lagrangian relaxation approach for a hybrid flowshop scheduling problem in the steelmaking-continuous casting process
Steered sequential projections for the inconsistent convex feasibility problem
A relaxed-projection splitting algorithm for variational inequalities in Hilbert spaces
Inexact subgradient methods for quasi-convex optimization problems
Interior quasi-subgradient method with non-Euclidean distances for constrained quasi-convex optimization problems in Hilbert spaces
Convergence analysis of iterative methods for nonsmooth convex optimization over fixed point sets of quasi-nonexpansive mappings
Network Synchronization with Convexity
On solving the Lagrangian dual of integer programs via an incremental approach
Proximal point algorithms for nonsmooth convex optimization with fixed point constraints
An Efficient Algorithm for Minimizing Multi Non-Smooth Component Functions
Rescheduling optimization of steelmaking-continuous casting process based on the Lagrangian heuristic algorithm
The proximal Chebychev center cutting plane algorithm for convex additive functions
Spectral projected subgradient with a momentum term for the Lagrangean dual approach
A subgradient method for multiobjective optimization on Riemannian manifolds
Incremental gradient-free method for nonsmooth distributed optimization
A decomposition-based solution method for stochastic mixed integer nonlinear programs
Accelerating incremental gradient optimization with curvature information
Dual decomposition for multi-agent distributed optimization with coupling constraints
A new step size rule for the superiorization method and its application in computerized tomography
A direct splitting method for nonsmooth variational inequalities
Distributed stochastic subgradient projection algorithms for convex optimization
Optimal subgradient algorithms for large-scale convex optimization in simple domains
Subgradient method for convex feasibility on Riemannian manifolds
On the computational efficiency of subgradient methods: a case study with Lagrangian bounds
Lagrangian relaxations on networks by \(\varepsilon \)-subgradient methods
Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
An incremental bundle method for portfolio selection problem under second-order stochastic dominance
A cyclic iterative approach and its modified version to solve coupled Sylvester-transpose matrix equations
A partially inexact bundle method for convex semi-infinite minmax problems
Distributed event-triggered adaptive partial diffusion strategy under dynamic network topology
Convergence of random sleep algorithms for optimal consensus
An incremental mirror descent subgradient algorithm with random sweeping and proximal step
Almost sure convergence of random projected proximal and subgradient algorithms for distributed nonsmooth convex optimization
Random algorithms for convex minimization problems
Incremental proximal methods for large scale convex optimization
The effect of deterministic noise in subgradient methods
An Asynchronous Bundle-Trust-Region Method for Dual Decomposition of Stochastic Mixed-Integer Programming
An infeasible-point subgradient method using adaptive approximate projections
String-averaging incremental stochastic subgradient algorithms
Accelerating Sparse Recovery by Reducing Chatter
Primal-dual incremental gradient method for nonsmooth and convex optimization problems
Adaptive clustering based on element-wised distance for distributed estimation over multi-task networks
On a multistage discrete stochastic optimization problem with stochastic constraints and nested sampling
Subgradient algorithms on Riemannian manifolds of lower bounded curvatures
A Smooth Inexact Penalty Reformulation of Convex Problems with Linear Constraints
Approximate Subgradient Methods for Lagrangian Relaxations on Networks
Global Convergence Rate of Proximal Incremental Aggregated Gradient Methods
Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate
Stochastic compositional gradient descent: algorithms for minimizing compositions of expected-value functions
A distributed hierarchical algorithm for multi-cluster constrained optimization
Near-optimal stochastic approximation for online principal component estimation
Asynchronous Lagrangian scenario decomposition
Dynamic smoothness parameter for fast gradient methods
An incremental subgradient method on Riemannian manifolds
Bundle methods for sum-functions with ``easy'' components: applications to multicommodity network design
Cyclic and simultaneous iterative methods to matrix equations of the form \(A_iXB_i=F_i\)
DISTRIBUTED PROXIMAL-GRADIENT METHOD FOR CONVEX OPTIMIZATION WITH INEQUALITY CONSTRAINTS
Convergence of the surrogate Lagrangian relaxation method
Incremental quasi-subgradient methods for minimizing the sum of quasi-convex functions
Inexact proximal \(\epsilon\)-subgradient methods for composite convex optimization problems
Incremental-like bundle methods with application to energy planning
An inexact modified subgradient algorithm for nonconvex optimization
Gradient-free method for nonsmooth distributed optimization
Approximate subgradient methods for nonlinearly constrained network flow problems
An effective line search for the subgradient method
Convergence of online mirror descent
Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications
Forward-Backward-Half Forward Algorithm for Solving Monotone Inclusions
A Subgradient Method Based on Gradient Sampling for Solving Convex Optimization Problems
A decentralized multi-objective optimization algorithm
Stochastic First-Order Methods with Random Constraint Projection
Modified Fejér sequences and applications
Nesterov perturbations and projection methods applied to IMRT
Analysis of the gradient method with an Armijo–Wolfe line search on a class of non-smooth convex functions
Incremental without replacement sampling in nonconvex optimization
Variable smoothing incremental aggregated gradient method for nonsmooth nonconvex regularized optimization
The stochastic trim-loss problem
Discrete-time gradient flows and law of large numbers in Alexandrov spaces
Generalized gradient learning on time series
Minimizing Piecewise-Concave Functions Over Polyhedra
Decentralized hierarchical constrained convex optimization
Mirror descent and nonlinear projected subgradient methods for convex optimization
Scaling Techniques for $\epsilon$-Subgradient Methods
A proximal-projection partial bundle method for convex constrained minimax problems
Incremental subgradient method for nonsmooth convex optimization with fixed point constraints
On the convergence of the forward–backward splitting method with linesearches
Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization
Accelerating Stochastic Composition Optimization
Subgradient methods for saddle-point problems
Quasi-convex feasibility problems: subgradient methods and convergence rates
On perturbed steepest descent methods with inexact line search for bilevel convex optimization
Convergence rates of subgradient methods for quasi-convex optimization problems
The incremental subgradient methods on distributed estimations in-network
Weak subgradient method for solving nonsmooth nonconvex optimization problems
On Solving the Convex Semi-Infinite Minimax Problems via Superlinear 𝒱𝒰 Incremental Bundle Technique with Partial Inexact Oracle