On high-order model regularization for multiobjective optimization
Publication: 5038176
DOI: 10.1080/10556788.2020.1719408
zbMath: 1501.90088
OpenAlex: W3005116386
MaRDI QID: Q5038176
No author found.
Publication date: 29 September 2022
Published in: Optimization Methods and Software
Full work available at URL: https://doi.org/10.1080/10556788.2020.1719408
Classification: Abstract computational complexity for mathematical programming problems (90C60); Multi-objective and goal programming (90C29)
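The record carries no abstract text, so the display below is only a minimal sketch of what "high-order model regularization" usually denotes in this multiobjective setting, following the \(p\)-th order regularization framework of the cited works (e.g. "Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models"); the symbols \(F_1,\dots,F_m\), \(T_{j,p}\), \(x_k\), \(s\), \(\sigma_k\) and \(p\) are notation introduced here, not taken from the paper itself.
\[
s_k \in \arg\min_{s \in \mathbb{R}^n} \; \max_{1 \le j \le m} \bigl( T_{j,p}(x_k, s) - F_j(x_k) \bigr) + \frac{\sigma_k}{p+1} \, \|s\|^{p+1},
\]
where \(T_{j,p}(x_k, s)\) is the \(p\)-th order Taylor polynomial of the objective \(F_j\) around the current iterate \(x_k\) and \(\sigma_k > 0\) is the regularization parameter. In the scalar case (\(m = 1\)) with Lipschitz continuous \(p\)-th derivatives, this framework yields approximate first-order criticality within \(\mathcal{O}(\epsilon^{-(p+1)/p})\) iterations in the worst case; the publication above presumably treats the analogous analysis for \(m > 1\).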
Related Items
- Complexity bound of trust-region methods for convex smooth unconstrained multiobjective optimization
- Convergence rates analysis of a multiobjective proximal gradient method
- Worst-case complexity bounds of directional direct-search methods for multiobjective optimization
Uses Software
Cites Work
- An inexact restoration approach to optimization problems with multiobjective constraints under weighted-sum scalarization
- Nonmonotone algorithm for minimization on closed sets with applications to minimization on Stiefel manifolds
- A new scalarization technique to approximate Pareto fronts of problems with disconnected feasible sets
- On the convergence and worst-case complexity of trust-region and regularization methods for unconstrained optimization
- A trust region algorithm with a worst-case iteration complexity of \(\mathcal{O}(\epsilon ^{-3/2})\) for nonconvex optimization
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Optimization over the efficient set of multi-objective convex optimal control problems
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Density-based globally convergent trust-region methods for self-consistent field electronic structure calculations
- Multiobjective optimization. Interactive and evolutionary approaches
- Existence theorems in vector optimization
- Nonlinear multiobjective optimization
- Steepest descent methods for multicriteria optimization.
- Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization
- Scalarizing vector optimization problems
- A projected gradient method for vector optimization problems
- On the complexity of finding first-order critical points in constrained nonlinear optimization
- Cubic regularization of Newton method and its global performance
- Proper efficiency and the theory of vector maximization
- Evaluation Complexity for Nonlinear Constrained Optimization Using Unscaled KKT Conditions and High-Order Models
- On the Complexity of Steepest Descent, Newton's and Regularized Newton's Methods for Nonconvex Unconstrained Optimization Problems
- On the convergence of the projected gradient method for vector optimization
- A new scalarization and numerical method for constructing the weak Pareto front of multi-objective optimization problems
- Adaptive Scalarization Methods in Multiobjective Optimization
- Newton's Method for Multiobjective Optimization
- Testing Unconstrained Optimization Software
- On High-order Model Regularization for Constrained Optimization
- Universal Regularization Methods: Varying the Power, the Smoothness and the Accuracy
- Nonlinear Conjugate Gradient Methods for Vector Optimization
- Introduction to Shape Optimization
- A quadratically convergent Newton method for vector optimization
- Complexity of gradient descent for multiobjective optimization
- The Use of Quadratic Regularization with a Cubic Descent Condition for Unconstrained Optimization
- Regularized Newton Methods for Minimizing Functions with Hölder Continuous Hessians
- Optimization over the efficient set