scientific article
From MaRDI portal
Publication:3320132
zbMath: 0535.90071
MaRDI QID: Q3320132
Publication date: 1983
Title: zbMATH Open Web Interface contents unavailable due to conflicting licenses.
Keywords: estimation; convergence rate; constrained minimization; global Lipschitz condition; differentiable objective function; convex programming in Hilbert space
MSC: Numerical mathematical programming methods (65K05); Convex programming (90C25); Programming in abstract spaces (90C48); Methods of successive quadratic programming type (90C55); Inner product spaces and their generalizations, Hilbert spaces (46C99)
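The keywords and MSC codes above place this record in first-order convex minimization with a guaranteed convergence rate. As context for the many "Nesterov acceleration" entries in the Related Items list below, here is a minimal sketch of the accelerated gradient iteration on a toy quadratic; the objective, constants, and function names are illustrative assumptions, not taken from the record itself (whose title text is unavailable).

```python
# Illustrative sketch of Nesterov-type accelerated gradient descent.
# Assumed toy objective: f(x) = 0.5*(x1^2 + 10*x2^2), so L = 10 is a
# Lipschitz constant of the gradient and the minimizer is the origin.

def grad(x):
    """Gradient of the assumed quadratic objective."""
    return [x[0], 10.0 * x[1]]

def nesterov(x0, L, steps):
    """Accelerated gradient method with the classical t_k momentum schedule."""
    x, y, t = list(x0), list(x0), 1.0
    for _ in range(steps):
        g = grad(y)
        # Gradient step from the extrapolated point y with step size 1/L.
        x_new = [y[i] - g[i] / L for i in range(len(y))]
        # Momentum parameter update: t_{k+1} = (1 + sqrt(1 + 4 t_k^2)) / 2.
        t_new = (1.0 + (1.0 + 4.0 * t * t) ** 0.5) / 2.0
        # Extrapolation: y = x_new + ((t-1)/t_new) * (x_new - x).
        y = [x_new[i] + ((t - 1.0) / t_new) * (x_new[i] - x[i])
             for i in range(len(x_new))]
        x, t = x_new, t_new
    return x

x_star = nesterov([1.0, 1.0], L=10.0, steps=300)
f_val = 0.5 * (x_star[0] ** 2 + 10.0 * x_star[1] ** 2)
```

With this schedule the method attains the \(O(1/k^2)\) objective-gap rate for smooth convex problems; after 300 steps on the toy quadratic the objective value `f_val` is well below the theoretical bound \(2L\|x_0 - x^*\|^2/(k+1)^2\).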
Related Items
Convergence Analysis of Volumetric Stretch Energy Minimization and Its Associated Optimal Mass Transport, Fast convergence of inertial dynamics with Hessian-driven damping under geometry assumptions, Inertial-based extragradient algorithm for approximating a common solution of split-equilibrium problems and fixed-point problems of nonexpansive semigroups, On the strong convergence of continuous Newton-like inertial dynamics with Tikhonov regularization for monotone inclusions, Some accelerated alternating proximal gradient algorithms for a class of nonconvex nonsmooth problems, A Unifying Framework and Comparison of Algorithms for Non‐negative Matrix Factorisation, An ordinary differential equation for modeling Halpern fixed-point Algorithm, Decentralized Strongly-Convex Optimization with Affine Constraints: Primal and Dual Approaches, Novel projection neurodynamic approaches for constrained convex optimization, A zeroing neural dynamics based acceleration optimization approach for optimizers in deep neural networks, Nesterov's acceleration for level set-based topology optimization using reaction-diffusion equations, Eigenvalue-Corrected Natural Gradient Based on a New Approximation, Inertial extrapolation method with regularization for solving a new class of bilevel problem in real Hilbert spaces, Strong Convergence of Trajectories via Inertial Dynamics Combining Hessian-Driven Damping and Tikhonov Regularization for General Convex Minimizations, A data-driven Kaczmarz iterative regularization method with non-smooth constraints for ill-posed problems, Practical perspectives on symplectic accelerated optimization, Fast Krasnosel’skiĭ–Mann Algorithm with a Convergence Rate of the Fixed Point Iteration of \(o\left(\frac{1}{k}\right)\), Accelerated Componentwise Gradient Boosting Using Efficient Data Representation and Momentum-Based Optimization, Doubly iteratively reweighted algorithm for constrained compressed sensing models, Two-step inertial
forward-reflected-backward splitting based algorithm for nonconvex mixed variational inequalities, Accelerated sparse recovery via gradient descent with nonlinear conjugate gradient momentum, Generalized damped Newton algorithms in nonsmooth optimization via second-order subdifferentials, Inertial projected gradient method for large-scale topology optimization, Direct nonlinear acceleration, FISTA is an automatic geometrically optimized algorithm for strongly convex functions, Branch-and-bound performance estimation programming: a unified methodology for constructing optimal optimization methods, No-regret algorithms in on-line learning, games and convex optimization, Finding a common solution of variational inequality and fixed point problems using subgradient extragradient techniques, Radial duality. II: Applications and algorithms, No-regret dynamics in the Fenchel game: a unified framework for algorithmic convex optimization, A self adaptive method for solving a class of bilevel variational inequalities with split variational inequality and composed fixed point problem constraints in Hilbert spaces, On a general structure for adaptation/learning algorithms. 
-- Stability and performance issues, An improved relaxed inertial projection algorithm for solving the minimum-norm solution of variational inequality and fixed point problems, Weak and strong convergence of a modified double inertial projection algorithm for solving variational inequality problems, First order inertial optimization algorithms with threshold effects associated with dry friction, SOLO FTRL algorithm for production management with transfer prices, A control theoretic framework for adaptive gradient optimizers, Time-adaptive Lagrangian variational integrators for accelerated optimization, Accelerated doubly stochastic gradient descent for tensor CP decomposition, Faster first-order primal-dual methods for linear programming using restarts and sharpness, Principled analyses and design of first-order methods with inexact proximal operators, Accelerated gradient methods combining Tikhonov regularization with geometric damping driven by the Hessian, First-order methods for convex optimization, Conic linear optimization for computer-assisted proofs. 
Abstracts from the workshop held April 10--16, 2022, A Projected Nesterov–Kaczmarz Approach to Stellar Population-Kinematic Distribution Reconstruction in Extragalactic Archaeology, Image restorations using a modified relaxed inertial technique for generalized split feasibility problems, Proximal gradient method with extrapolation and line search for a class of non-convex and non-smooth problems, A second order primal-dual dynamical system for a convex-concave bilinear saddle point problem, Strongly convergent inertial forward-backward-forward algorithm without on-line rule for variational inequalities, On mathematical modeling in image reconstruction and beyond, Open issues and recent advances in DC programming and DCA, Recent Theoretical Advances in Non-Convex Optimization, Fast convex optimization via inertial dynamics combining viscous and Hessian-driven damping with time rescaling, An inertial based forward-backward algorithm for monotone inclusion problems and split mixed equilibrium problems in Hilbert spaces, New inertial projection methods for solving multivalued variational inequality problems beyond monotonicity, Accelerated methods with fastly vanishing subgradients for structured non-smooth minimization, Generalized unnormalized optimal transport and its fast algorithms, An accelerated minimal gradient method with momentum for strictly convex quadratic optimization, On obtaining sparse semantic solutions for inverse problems, control, and neural network training, Asymptotic for a second order evolution equation with damping and regularizing terms, An accelerated forward-backward algorithm with a new linesearch for convex minimization problems and its applications, First-order optimization algorithms via inertial systems with Hessian driven damping, First-order inertial algorithms involving dry friction damping, Accelerated optimization on Riemannian manifolds via discrete constrained variational integrators, Sparse matrix linear models
for structured high-throughput data, Fast and stable nonconvex constrained distributed optimization: the ELLADA algorithm, Accelerated proximal gradient method for bi-modulus static elasticity, Fast convergence of dynamical ADMM via time scaling of damped inertial dynamics, Douglas-Rachford splitting and ADMM for nonconvex optimization: accelerated and Newton-type linesearch algorithms, An inertial Bregman generalized alternating direction method of multipliers for nonconvex optimization, A fast proximal iteratively reweighted nuclear norm algorithm for nonconvex low-rank matrix minimization problems, Inertial stochastic PALM and applications in machine learning, On the non-symmetric semidefinite Procrustes problem, Two optimization approaches for solving split variational inclusion problems with applications, Riemannian proximal gradient methods, Sampling Kaczmarz-Motzkin method for linear feasibility problems: generalization and acceleration, An \(O(s^r)\)-resolution ODE framework for understanding discrete-time algorithms and applications to the linear convergence of minimax problems, An accelerated common fixed point algorithm for a countable family of \(G\)-nonexpansive mappings with applications to image recovery, Convergence rates of a dual gradient method for constrained linear ill-posed problems, Inertial accelerated primal-dual methods for linear equality constrained convex optimization problems, Generalized mirror prox algorithm for monotone variational inequalities: Universality and inexact oracle, Relaxed inertial methods for solving split variational inequality problems without product space formulation, Fast inertial dynamic algorithm with smoothing method for nonsmooth convex optimization, A note on approximate accelerated forward-backward methods with absolute and relative errors, and possibly strongly convex objectives, An inertially constructed forward-backward splitting algorithm in Hilbert spaces, Solving common nonmonotone equilibrium problems 
using an inertial parallel hybrid algorithm with Armijo line search with applications to image recovery, An inertial parallel algorithm for a finite family of \(G\)-nonexpansive mappings with application to the diffusion problem, A parallel Tseng's splitting method for solving common variational inclusion applied to signal recovery problems, Rates of convergence of randomized Kaczmarz algorithms in Hilbert spaces, Laplacian smoothing gradient descent, The computational asymptotics of Gaussian variational inference and the Laplace approximation, Accelerated sampling Kaczmarz Motzkin algorithm for the linear feasibility problem, ASD+M: automatic parameter tuning in stochastic optimization and on-line learning, A fast iterative algorithm for high-dimensional differential network, Transportless conjugate gradient for optimization on Stiefel manifold, Convergence rate of inertial proximal algorithms with general extrapolation and proximal coefficients, Social welfare and profit maximization from revealed preferences, Fast and safe: accelerated gradient methods with optimality certificates and underestimate sequences, Accelerated Bregman proximal gradient methods for relatively smooth convex optimization, An accelerated first-order method with complexity analysis for solving cubic regularization subproblems, Fastest rates for stochastic mirror descent methods, A projected extrapolated gradient method with larger step size for monotone variational inequalities, On the convergence of a class of inertial dynamical systems with Tikhonov regularization, New inertial relaxed method for solving split feasibilities, Continuous Newton-like inertial dynamics for monotone inclusions, Error bound of critical points and KL property of exponent 1/2 for squared F-norm regularized factorization, Inertial iterative algorithms for common solution of variational inequality and system of variational inequalities problems, Convergence of relaxed inertial subgradient extragradient methods for 
quasimonotone variational inequality problems, Accelerated information gradient flow, EGC: entropy-based gradient compression for distributed deep learning, \(\mathrm{B}\)-subdifferentials of the projection onto the matrix simplex, Variational inequality over the set of common solutions of a system of bilevel variational inequality problem with applications, Stochastic generalized gradient methods for training nonconvex nonsmooth neural networks, A fast and efficient smoothing approach to Lasso regression and an application in statistical genetics: polygenic risk scores for chronic obstructive pulmonary disease (COPD), Determining a time-dependent coefficient in a time-fractional diffusion-wave equation with the Caputo derivative by an additional integral condition, An accelerated smoothing gradient method for nonconvex nonsmooth minimization in image processing, An inexact proximal augmented Lagrangian framework with arbitrary linearly convergent inner solver for composite convex optimization, Iteration complexity of generalized complementarity problems, Learning context-dependent choice functions, Damped inertial dynamics with vanishing Tikhonov regularization: strong asymptotic convergence towards the minimum norm solution, New inertial proximal gradient methods for unconstrained convex optimization problems, A proximal point like method for solving tensor least-squares problems, A piecewise conservative method for unconstrained convex optimization, Iterative pre-conditioning for expediting the distributed gradient-descent method: the case of linear least-squares problem, Two-stage geometric information guided image reconstruction, Weak and strong convergence of inertial algorithms for solving split common fixed point problems, An accelerated viscosity forward-backward splitting algorithm with the linesearch process for convex minimization problems, The inertial relaxed algorithm with Armijo-type line search for solving multiple-sets split feasibility problem, 
Analysis of generalized Bregman surrogate algorithms for nonsmooth nonconvex statistical learning, A strongly convergent algorithm for solving common variational inclusion with application to image recovery problems, Alternating direction based method for optimal control problem constrained by Stokes equation, How does momentum benefit deep neural networks architecture design? A few case studies, Stochastic relaxed inertial forward-backward-forward splitting for monotone inclusions in Hilbert spaces, Self adaptive inertial relaxed \(CQ\) algorithms for solving split feasibility problem with multiple output sets, An accelerated differential equation system for generalized equations, Convergence rates of first- and higher-order dynamics for solving linear ill-posed problems, SRKCD: a stabilized Runge-Kutta method for stochastic optimization, Limited-memory common-directions method for large-scale optimization: convergence, parallelization, and distributed optimization, Understanding the acceleration phenomenon via high-resolution differential equations, From differential equation solvers to accelerated first-order methods for convex optimization, Convergence rates of damped inertial dynamics from multi-degree-of-freedom system, Constructing unbiased gradient estimators with finite variance for conditional stochastic optimization, Fast primal-dual algorithm via dynamical system for a linearly constrained convex optimization problem, A stochastic gradient algorithm with momentum terms for optimal control problems governed by a convection-diffusion equation with random diffusivity, On the effect of perturbations in first-order optimization methods with inertia and Hessian driven damping, A fast continuous time approach with time scaling for nonsmooth convex optimization, A nested primal-dual FISTA-like scheme for composite convex optimization problems, Fast inertial extragradient algorithms for solving non-Lipschitzian equilibrium problems without monotonicity condition
in real Hilbert spaces, Generating Nesterov's accelerated gradient algorithm by using optimal control theory for optimization, An inertial Mann forward-backward splitting algorithm of variational inclusion problems and its applications, Viscosity \(S\)-iteration method with inertial technique and self-adaptive step size for split variational inclusion, equilibrium and fixed point problems, An inertial semi-forward-reflected-backward splitting and its application, A secant-based Nesterov method for convex functions, Reference and command governors for systems with constraints: A survey on theory and applications, Global optimization issues in deep network regression: an overview, Certification aspects of the fast gradient method for solving the dual of parametric convex programs, Nonlinear regularization techniques for seismic tomography, Exploring critical points of energy landscapes: from low-dimensional examples to phase field crystal PDEs, Sparse regression with multi-type regularized feature modeling, Combining fast inertial dynamics for convex optimization with Tikhonov regularization, Accelerated additive Schwarz methods for convex optimization with adaptive restart, An engineering interpretation of Nesterov's convex minimization algorithm and time integration: application to optimal fiber orientation, A fast two-point gradient algorithm based on sequential subspace optimization method for nonlinear ill-posed problems, Bias-compensated affine-projection-like algorithm based on maximum correntropy criterion for robust filtering, An improved inertial extragradient subgradient method for solving split variational inequality problems, Fast convergence of inertial dynamics and algorithms with asymptotic vanishing viscosity, Application of Monte Carlo stochastic optimization (MOST) to deep learning, A multiplicative weight updates algorithm for packing and covering semi-infinite linear programs, Nearly linear-time packing and covering LP solvers. 
Nearly linear-time packing and covering LP solvers, achieving width-independence and \(O(1/\varepsilon)\)-convergence, Fast gradient methods for uniformly convex and weakly smooth problems, An optimal subgradient algorithm with subspace search for costly convex optimization problems, Newton-type inertial algorithms for solving monotone equations governed by sums of potential and nonpotential operators, A novel algorithm for generalized split common null point problem with applications, Adaptive \(l_1\)-regularization for short-selling control in portfolio selection, Accelerated randomized mirror descent algorithms for composite non-strongly convex optimization, Alternating forward-backward splitting for linearly constrained optimization problems, An unexpected connection between Bayes \(A\)-optimal designs and the group Lasso, Second-order flows for computing the ground states of rotating Bose-Einstein condensates, Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks, Contrast invariant SNR and isotonic regressions, Accelerated gradient boosting, A differential variational approach for handling fluid-solid interaction problems via smoothed particle hydrodynamics, Efficient iterative solution of finite element discretized nonsmooth minimization problems, Convergence rates of the heavy-ball method under the Łojasiewicz property, Generalized self-concordant analysis of Frank-Wolfe algorithms, Nonlinear acceleration of momentum and primal-dual algorithms, An algorithm for split equilibrium and fixed-point problems using inertial extragradient techniques, Large-scale distributed sparse class-imbalance learning, Synchronous parallel block coordinate descent method for nonsmooth convex function minimization, Topology optimization method with nonlinear diffusion, Mean curvature flow for generating discrete surfaces with piecewise constant mean curvatures, Bregman Itoh-Abe methods for sparse optimisation,
Optimal convergence rates for damped inertial gradient dynamics with flat geometries, A simple nearly optimal restart scheme for speeding up first-order methods, A matrix nonconvex relaxation approach to unconstrained binary polynomial programs, Lower bounds for finding stationary points I, Efficient first-order methods for convex minimization: a constructive approach, Convergence of a relaxed inertial proximal algorithm for maximally monotone operators, Fast convergence of inertial gradient dynamics with multiscale aspects, Accelerated stochastic variance reduction for a class of convex optimization problems, NESTANets: stable, accurate and efficient neural networks for analysis-sparse inverse problems, How can machine learning and optimization help each other better?, An active-set proximal-Newton algorithm for \(\ell_1\) regularized optimization problems with box constraints, Perron vector optimization applied to search engines, An algorithm for the split feasible problem and image restoration, A fast two-point gradient method for solving non-smooth nonlinear ill-posed problems, Numerical computations of split Bregman method for fourth order total variation flow, An accelerated Uzawa method for application to frictionless contact problem, Tikhonov regularization of a second order dynamical system with Hessian driven damping, Alternating minimization methods for strongly convex optimization, Improved convergence rates and trajectory convergence for primal-dual dynamical systems with vanishing damping, An extension of the second order dynamical system that models Nesterov's convex gradient method, A review of nonlinear FFT-based computational homogenization methods, Convergence rates for an inertial algorithm of gradient type associated to a smooth non-convex minimization, Effect of shrinking projection and CQ-methods on two inertial forward-backward algorithms for solving variational inclusion problems, Modified hybrid projection methods with SP iterations for 
quasi-nonexpansive multivalued mappings in Hilbert spaces, Performance of first-order methods for smooth convex minimization: a novel approach, Projected subgradient minimization versus superiorization, On starting and stopping criteria for nested primal-dual iterations, Optimizing cluster structures with inner product induced norm based dissimilarity measures: theoretical development and convergence analysis, Regularized nonlinear acceleration, On the efficient computation of a generalized Jacobian of the projector over the Birkhoff polytope, Cohesive networks using delayed self reinforcement, Accelerated gradient-free optimization methods with a non-Euclidean proximal operator, Phase recovery, MaxCut and complex semidefinite programming, Comparative study of RPSALG algorithm for convex semi-infinite programming, A neural network approach to efficient valuation of large portfolios of variable annuities, Convergence rate of inertial forward-backward algorithm beyond Nesterov's rule, Restarting the accelerated coordinate descent method with a rough strong convexity estimate, A proximal regularized Gauss-Newton-Kaczmarz method and its acceleration for nonlinear ill-posed problems, Accelerated variational PDEs for efficient solution of regularized inversion problems, Provable accelerated gradient method for nonconvex low rank optimization, Nesterov-aided stochastic gradient methods using Laplace approximation for Bayesian design optimization, A comparison of numerical methods for solving multibody dynamics problems with frictional contact modeled via differential variational inequalities, Advances in the simulation of viscoplastic fluid flows using interior-point methods, Accelerated first-order methods for large-scale convex optimization: nearly optimal complexity under strong convexity, Accelerated alternating direction method of multipliers: an optimal \(O(1 / K)\) nonergodic analysis, Optimal subgradient methods: computational properties for large-scale linear 
inverse problems, Asymptotic equivalence of evolution equations governed by cocoercive operators and their forward discretizations, Deep relaxation: partial differential equations for optimizing deep neural networks, Numerical optimal control of a size-structured PDE model for metastatic cancer treatment, Oracle complexity of second-order methods for smooth convex optimization, Regularization of inverse problems by two-point gradient methods in Banach spaces, PDE acceleration: a convergence rate analysis and applications to obstacle problems, Generalized affine scaling algorithms for linear programming problems, Convergence of a relaxed inertial forward-backward algorithm for structured monotone inclusions, Similarity preserving low-rank representation for enhanced data representation and effective subspace learning, An efficient monotone projected Barzilai-Borwein method for nonnegative matrix factorization, A fast image recovery algorithm based on splitting deblurring and denoising, Adaptive Euclidean maps for histograms: generalized Aitchison embeddings, Restricted strong convexity and its applications to convergence analysis of gradient-type methods in convex optimization, Adaptive restart for accelerated gradient schemes, A regularizing multilevel approach for nonlinear inverse problems, Accelerated proximal algorithms with a correction term for monotone inclusions, Stochastic accelerated alternating direction method of multipliers with importance sampling, Diffusion tensor imaging with deterministic error bounds, A relaxed-projection splitting algorithm for variational inequalities in Hilbert spaces, OSGA: a fast subgradient algorithm with optimal complexity, A linear-time algorithm for trust region problems, A smoothing SQP framework for a class of composite \(L_q\) minimization over polyhedron, A convergent least-squares regularized blind deconvolution approach, Optimized first-order methods for smooth convex minimization, New results on subgradient methods 
for strongly convex optimization problems with a unified analysis, Comparison of minimization methods for nonsmooth image segmentation, Fast convex optimization via inertial dynamics with Hessian driven damping, Efficient valuation of SCR via a neural network approach, Sparse estimation of high-dimensional correlation matrices, A new fast algorithm for constrained four-directional total variation image denoising problem, Accelerated Bregman method for linearly constrained \(\ell _1-\ell _2\) minimization, Clustering and feature selection using sparse principal component analysis, Accelerated parallel and distributed algorithm using limited internal memory for nonnegative matrix factorization, Approximation accuracy, gradient methods, and error bound for structured convex optimization, An adaptive three-term conjugate gradient method based on self-scaling memoryless BFGS matrix, Large-scale eigenvector approximation via Hilbert space embedding Nyström, An alternating direction method for finding Dantzig selectors, Domain adaptation and sample bias correction theory and algorithm for regression, A conjugate subgradient algorithm with adaptive preconditioning for the least absolute shrinkage and selection operator minimization, iPiasco: inertial proximal algorithm for strongly convex optimization, An efficient primal-dual method for the obstacle problem, Proximal algorithms for multicomponent image recovery problems, A linear-time algorithm for the trust region subproblem based on hidden convexity, A semi-analytical approach for the positive semidefinite Procrustes problem, Feature-aware regularization for sparse online learning, Optimal subgradient algorithms for large-scale convex optimization in simple domains, Operator splittings, Bregman methods and frame shrinkage in image processing, Image restoration based on the hybrid total-variation-type model, Metric selection in fast dual forward-backward splitting, Stochastic heavy ball, Sparse adaptive parameterization 
of variability in image ensembles, A cyclic projected gradient method, On the reconstruction of media inhomogeneity by inverse wave scattering model, The Shannon total variation, Convergence of damped inertial dynamics governed by regularized maximally monotone operators, Splitting and linearizing augmented Lagrangian algorithm for subspace recovery from corrupted observations, A global piecewise smooth Newton method for fast large-scale model predictive control, Rate of convergence of inertial gradient dynamics with time-dependent viscous damping coefficient, A block coordinate gradient descent method for regularized convex separable optimization and covariance selection, Random algorithms for convex minimization problems, Quadratic regularization projected Barzilai-Borwein method for nonnegative matrix factorization, Robust least square semidefinite programming with applications, Bound alternative direction optimization for image deblurring, Mixed higher order variational model for image recovery, A full RNS variant of approximate homomorphic encryption, Distributed adaptive dynamic programming for data-driven optimal control, A stable method solving the total variation dictionary model with \(L^\infty\) constraints, Inertial forward-backward algorithms with perturbations: application to Tikhonov regularization, Dual subgradient algorithms for large-scale nonsmooth learning problems, Convergence of the augmented decomposition algorithm, A duality based approach to the minimizing total variation flow in the space \(H^{-s}\), On variance reduction for stochastic smooth convex optimization with multiplicative noise, Convergence of inertial dynamics and proximal algorithms governed by maximally monotone operators, Preconditioned accelerated gradient descent methods for locally Lipschitz smooth objectives with applications to the solution of nonlinear PDEs, Accelerated proximal gradient method for elastoplastic analysis with von Mises yield criterion, Linear 
convergence rates for variants of the alternating direction method of multipliers in smooth cases, A proximal difference-of-convex algorithm with extrapolation, Proximal quasi-Newton methods for regularized convex optimization with linear and accelerated sublinear convergence rates, Smooth strongly convex interpolation and exact worst-case performance of first-order methods, Stochastic compositional gradient descent: algorithms for minimizing compositions of expected-value functions, Information-based complexity of linear operator equations, Solving structured nonsmooth convex optimization with complexity \(\mathcal {O}(\varepsilon ^{-1/2})\), On the proximal gradient algorithm with alternated inertia, Accelerated primal-dual proximal block coordinate updating methods for constrained convex optimization, Equivalent Lipschitz surrogates for zero-norm and rank optimization problems, A finite element/operator-splitting method for the numerical solution of the two dimensional elliptic Monge-Ampère equation, Iterative algorithms for total variation-like reconstructions in seismic tomography, Efficient multiplicative noise removal method using isotropic second order total variation, Directional total generalized variation regularization, On the convergence of the iterates of proximal gradient algorithm with extrapolation for convex nonsmooth minimization problems, Inexact proximal \(\epsilon\)-subgradient methods for composite convex optimization problems, Local and global convergence of a general inertial proximal splitting scheme for minimizing composite functions, A wavelet frame approach for removal of mixed Gaussian and impulse noise on surfaces, An accelerated method for nonlinear elliptic PDE, Proximal Methods for Sparse Optimal Scoring and Discriminant Analysis, Convergence rates of an inertial gradient descent algorithm under growth and flatness conditions, Inertial proximal gradient methods with Bregman regularization for a class of nonconvex optimization 
problems, Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods, Nonconvex robust programming via value-function optimization, Asymptotic stabilization of inertial gradient dynamics with time-dependent viscosity, An accelerated IRNN-iteratively reweighted nuclear norm algorithm for nonconvex nonsmooth low-rank minimization problems, Lagrangian penalization scheme with parallel forward-backward splitting, A second-order adaptive Douglas-Rachford dynamic method for maximal \(\alpha\)-monotone operators, Nearly optimal first-order methods for convex optimization under gradient norm measure: an adaptive regularization approach, Minimax and Minimax Projection Designs Using Clustering, Proximal algorithms in statistics and machine learning, Algorithms for positive semidefinite factorization, A nonmonotone gradient algorithm for total variation image denoising problems, Clustering of fuzzy data and simultaneous feature selection: a model selection approach, Is there an analog of Nesterov acceleration for gradient-based MCMC?, An inertial extrapolation method for solving generalized split feasibility problems in real Hilbert spaces, A stochastic subspace approach to gradient-free optimization in high dimensions, Augmented Lagrangian algorithms for linear programming, A reweighted \(\ell^2\) method for image restoration with Poisson and mixed Poisson-Gaussian noise, An efficient nonmonotone projected Barzilai–Borwein method for nonnegative matrix factorization with extrapolation, Accelerated Gradient Descent Methods for the Uniaxially Constrained Landau-de Gennes Model, Block Bregman Majorization Minimization with Extrapolation, Convex-Concave Backtracking for Inertial Bregman Proximal Gradient Algorithms in Nonconvex Optimization, A novel two-point gradient method for regularization of inverse problems in Banach spaces, An Accelerated Level-Set Method for Inverse Scattering Problems, Accelerated projected gradient 
method with adaptive step size for compliance minimization problem, Proximal Splitting Methods in Signal Processing, Low-Rank and Sparse Dictionary Learning, Accelerated, Parallel, and Proximal Coordinate Descent, Approximation of solutions of the split minimization problem with multiple output sets and common fixed point problems in real Banach spaces, Nesterov’s accelerated gradient method for nonlinear ill-posed problems with a locally convex residual functional, The rate of convergence of optimization algorithms obtained via discretizations of heavy ball dynamical systems for convex optimization problems, Interior Tomography Using 1D Generalized Total Variation. Part II: Multiscale Implementation, Deep Learning--Based Dictionary Learning and Tomographic Image Reconstruction, Global Convergence of Stochastic Gradient Hamiltonian Monte Carlo for Nonconvex Stochastic Optimization: Nonasymptotic Performance Bounds and Momentum-Based Acceleration, A double projection algorithm with inertial effects for solving split feasibility problems and applications to image restoration, A generalized adaptive Levenberg–Marquardt method for solving nonlinear ill-posed problems, Decomposition Methods for Sparse Matrix Nearness Problems, Exact gradient methods with memory, MAGMA: Multilevel Accelerated Gradient Mirror Descent Algorithm for Large-Scale Convex Composite Minimization, Faster Lagrangian-Based Methods in Convex Optimization, Structured Sparsity: Discrete and Convex Approaches, GMRES-Accelerated ADMM for Quadratic Objectives, Accelerated Optimization in the PDE Framework: Formulations for the Manifold of Diffeomorphisms, Two New Inertial Algorithms for Solving Variational Inequalities in Reflexive Banach Spaces, A Subspace Acceleration Method for Minimization Involving a Group Sparsity-Inducing Regularizer, A Projected Gradient and Constraint Linearization Method for Nonlinear Model Predictive Control, Accelerated Extra-Gradient Descent: A Novel Accelerated First-Order 
Method, Accelerated Methods for NonConvex Optimization, On Efficiently Solving the Subproblems of a Level-Set Method for Fused Lasso Problems, Statistical Query Algorithms for Mean Vector Estimation and Stochastic Convex Optimization, Modified ADMM algorithm for solving proximal bound formulation of multi-delay optimal control problem with bounded control, Linear Convergence of Proximal Gradient Algorithm with Extrapolation for a Class of Nonconvex Nonsmooth Minimization Problems, Newton Sketch: A Near Linear-Time Optimization Algorithm with Linear-Quadratic Convergence, A Highly Efficient Semismooth Newton Augmented Lagrangian Method for Solving Lasso Problems, The Differential Inclusion Modeling FISTA Algorithm and Optimality of Convergence Rate in the Case b ≤ 3, A Multiplicative Weights Update Algorithm for Packing and Covering Semi-infinite Linear Programs, Convergence Rates of Inertial Forward-Backward Algorithms, A generic online acceleration scheme for optimization algorithms via relaxation and inertia, An accelerated primal-dual iterative scheme for the L²-TV regularized model of linear inverse problems, The Approximate Duality Gap Technique: A Unified Theory of First-Order Methods, Non-monotone Behavior of the Heavy Ball Method, Finding the Nearest Positive-Real System, Proximal extrapolated gradient methods for variational inequalities, Sublinear-Time Quadratic Minimization via Spectral Decomposition of Matrices, Optimization Methods for Large-Scale Machine Learning, Learning partial differential equations via data discovery and sparse optimization, Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice, Heuristic rule for non-stationary iterated Tikhonov regularization in Banach spaces, Accelerated Residual Methods for the Iterative Solution
of Systems of Equations, Analysis of Optimization Algorithms via Integral Quadratic Constraints: Nonstrongly Convex Problems, PCM-TV-TFV: A Novel Two-Stage Framework for Image Reconstruction from Fourier Data, Lamé Parameter Estimation from Static Displacement Field Measurements in the Framework of Nonlinear Inverse Problems, Sparse PCA: Convex Relaxations, Algorithms and Applications, An accelerated homotopy perturbation iteration for nonlinear ill-posed problems in Banach spaces with uniformly convex penalty, Identifying source term in the subdiffusion equation with L²-TV regularization, An Efficient Inexact ABCD Method for Least Squares Semidefinite Programming, Algorithm 996, A remark on accelerated block coordinate descent for computing the proximity operators of a sum of convex functions, On dissipative symplectic integration with applications to gradient-based optimization, Convergence Rates of Inertial Primal-Dual Dynamical Methods for Separable Convex Optimization Problems, Search Direction Correction with Normalized Gradient Makes First-Order Methods Faster, The Rate of Convergence of Nesterov's Accelerated Forward-Backward Method is Actually Faster Than $1/k^2$, Adaptive Mirror Descent Algorithms for Convex and Strongly Convex Optimization Problems with Functional Constraints, Resource Allocation in Communication Networks with Large Number of Users: The Dual Stochastic Gradient Method, Accelerated First-Order Primal-Dual Proximal Methods for Linearly Constrained Composite Convex Programming, Scalable Robust Matrix Recovery: Frank--Wolfe Meets Proximal Methods, Optimization in High Dimensions via Accelerated, Parallel, and Proximal Coordinate Descent, A Sparse Learning Approach to Relative-Volatility-Managed Portfolio Selection, Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression, Linear Coupling: An Ultimate Unification of Gradient and Mirror Descent, The two-point gradient methods for nonlinear inverse problems based on
Bregman projections, A new class of accelerated regularization methods, with application to bioluminescence tomography, Unified Acceleration of High-Order Algorithms under General Hölder Continuity, Implicit Regularization and Momentum Algorithms in Nonlinearly Parameterized Adaptive Control and Prediction, Dual Variable Inertial Accelerated Algorithm for Split System of Null Point Equality Problems, Adaptive Hamiltonian Variational Integrators and Applications to Symplectic Accelerated Optimization, Tikhonov Regularization of a Perturbed Heavy Ball System with Vanishing Damping, An Overview of Computational Sparse Models and Their Applications in Artificial Intelligence, An accelerated majorization-minimization algorithm with convergence guarantee for non-Lipschitz wavelet synthesis model, Optimization on Spheres: Models and Proximal Algorithms with Computational Performance Comparisons, Proximal Gradient Methods for Machine Learning and Imaging, Unbiased MLMC Stochastic Gradient-Based Optimization of Bayesian Experimental Designs, Research on three-step accelerated gradient algorithm in deep learning, Relaxed inertial methods for solving the split monotone variational inclusion problem beyond co-coerciveness, Differentially private distributed logistic regression with the objective function perturbation, Solving the OSCAR and SLOPE Models Using a Semismooth Newton-Based Augmented Lagrangian Method, Learning probabilistic neural representations with randomly connected circuits, Improving “Fast Iterative Shrinkage-Thresholding Algorithm”: Faster, Smarter, and Greedier, Differentially Private Accelerated Optimization Algorithms, Column $\ell_{2,0}$-Norm Regularized Factorization Model of Low-Rank Matrix Recovery and Its Computation, Generalized relaxed inertial method with regularization for solving split feasibility problems in real Hilbert spaces, Low-rank, Orthogonally Decomposable Tensor Regression With Application to
Visual Stimulus Decoding of fMRI Data, Primal–dual accelerated gradient methods with small-dimensional relaxation oracle, Dualization and Automatic Distributed Parameter Selection of Total Generalized Variation via Bilevel Optimization, Bregman Proximal Point Algorithm Revisited: A New Inexact Version and Its Inertial Variant, Potential Function-Based Framework for Minimizing Gradients in Convex and Min-Max Optimization, Scheduled Restart Momentum for Accelerated Stochastic Gradient Descent, Active Neuron Least Squares: A Training Method for Multivariate Rectified Neural Networks, Convergence Rates of the Heavy Ball Method for Quasi-strongly Convex Optimization, From the Ravine Method to the Nesterov Method and Vice Versa: A Dynamical System Perspective, A new minimizing-movements scheme for curves of maximal slope, Lower bounds for non-convex stochastic optimization, An optimal gradient method for smooth strongly convex minimization, A Second-Order Cone Based Approach for Solving the Trust-Region Subproblem and Its Variants, Accelerated differential inclusion for convex optimization, Convergence of iterates for first-order optimization algorithms with inertia and Hessian driven damping, On the strong convergence of the trajectories of a Tikhonov regularized second order dynamical system with asymptotically vanishing damping, New Bregman proximal type algorithms for solving DC optimization problems, Newton acceleration on manifolds identified by proximal gradient methods, Fast augmented Lagrangian method in the convex regime with convergence guarantees for the iterates, An accelerated first-order method for non-convex optimization on manifolds, Zero-norm regularized problems: equivalent surrogates, proximal MM method and statistical error bound, Multiple change points detection in high-dimensional multivariate regression, Coseparable Nonnegative Matrix Factorization, An ADMM-based algorithm for stabilizing distributed model predictive control without
terminal cost and constraint, Automatic, dynamic, and nearly optimal learning rate specification via local quadratic approximation, Inertial primal-dual dynamics with damping and scaling for linearly constrained convex optimization problems, Relaxed inertial Tseng extragradient method for variational inequality and fixed point problems, On the second-order asymptotical regularization of linear ill-posed inverse problems, Rate of convergence of the Nesterov accelerated gradient method in the subcritical case α ≤ 3, Convergence rate of a relaxed inertial proximal algorithm for convex minimization, Inertial, Corrected, Primal-Dual Proximal Splitting, First-Order Methods for Nonconvex Quadratic Minimization, Open Problem—Iterative Schemes for Stochastic Optimization: Convergence Statements and Limit Theorems, MultiComposite Nonconvex Optimization for Training Deep Neural Networks, Finite Convergence of Proximal-Gradient Inertial Algorithms Combining Dry Friction with Hessian-Driven Damping, Analysis of a heuristic rule for the IRGNM in Banach spaces with convex regularization terms, Convergence analysis of a two-point gradient method for nonlinear ill-posed problems, Optimal Affine-Invariant Smooth Minimization Algorithms, A new Kaczmarz-type method and its acceleration for nonlinear ill-posed problems, Accelerated Optimization in the PDE Framework Formulations for the Active Contour Case, Convergence rate analysis of proximal gradient methods with applications to composite minimization problems, Application of a class of iterative algorithms and their accelerations to Jacobian-based linearized EIT image reconstruction, Faster response in bounded-update-rate, discrete-time linear networks using delayed self-reinforcement, On the Asymptotic Linear Convergence Speed of Anderson Acceleration, Nesterov Acceleration, and Nonlinear GMRES, Fast convergence of generalized forward-backward algorithms for structured monotone inclusions, IMRO: A Proximal
Quasi-Newton Method for Solving $\ell_1$-Regularized Least Squares Problems, Ensemble Kalman inversion: a derivative-free technique for machine learning tasks, Complexity of gradient descent for multiobjective optimization, Computing Ground States of Bose--Einstein Condensates with Higher Order Interaction via a Regularized Density Function Formulation, Optimal Convergence Rates for Nesterov Acceleration, Sharpness, Restart, and Acceleration, On the Nonergodic Convergence Rate of an Inexact Augmented Lagrangian Framework for Composite Convex Programming, Imaging with highly incomplete and corrupted data, l1-Penalised Ordinal Polytomous Regression Estimators with Application to Gene Expression Studies, An Inexact Variable Metric Proximal Point Algorithm for Generic Quasi-Newton Acceleration, Backtracking Strategies for Accelerated Descent Methods with Smooth Composite Objectives, Gradient Descent Finds the Cubic-Regularized Nonconvex Newton Step, Fast Proximal Methods via Time Scaling of Damped Inertial Dynamics, On Quasi-Newton Forward-Backward Splitting: Proximal Calculus and Convergence, Adaptive FISTA for Nonconvex Optimization, Variational Image Regularization with Euler's Elastica Using a Discrete Gradient Scheme, Quantum entropic regularization of matrix-valued optimal transport, Computing the Best Approximation over the Intersection of a Polyhedral Set and the Doubly Nonnegative Cone, Structure Tensor Total Variation, Image Restoration with Mixed or Unknown Noises, Robust Accelerated Gradient Methods for Smooth Strongly Convex Functions, Inexact Newton regularization combined with two-point gradient methods for nonlinear ill-posed problems, On Solving Large-Scale Polynomial Convex Problems by Randomized First-Order Algorithms, Generalized Momentum-Based Methods: A Hamiltonian Perspective, A dual approach for optimal algorithms in distributed optimization over networks, Analysis of a generalized regularized
Gauss–Newton method under heuristic rule in Banach spaces, A partially inexact ADMM with o(1/n) asymptotic convergence rate, 𝒪(1/n) complexity, and immediate relative error tolerance, Multiply Accelerated Value Iteration for NonSymmetric Affine Fixed Point Problems and Application to Markov Decision Processes, Distributed Stochastic Inertial-Accelerated Methods with Delayed Derivatives for Nonconvex Problems, A Variational Formulation of Accelerated Optimization on Riemannian Manifolds, An Adaptive Gradient Method with Energy and Momentum, Weak and strong convergence results for the modified Noor iteration of three quasi-nonexpansive multivalued mappings in Hilbert spaces, On Degenerate Doubly Nonnegative Projection Problems, On the Generation of Sampling Schemes for Magnetic Resonance Imaging, On the Convergence Rate of Incremental Aggregated Gradient Algorithms, A Dimension Reduction Technique for Large-Scale Structured Sparse Optimization Problems with Application to Convex Clustering, High-Order Optimization Methods for Fully Composite Problems, Scaled, Inexact, and Adaptive Generalized FISTA for Strongly Convex Optimization, Bilevel Methods for Image Reconstruction, Exact Worst-Case Performance of First-Order Methods for Composite Convex Optimization