An incremental descent method for multi-objective optimization
Publication: 5882236
DOI: 10.1080/10556788.2022.2124989
OpenAlex: W3164887315
MaRDI QID: Q5882236
I. F. D. Oliveira, Ricardo H. C. Takahashi
Publication date: 15 March 2023
Published in: Optimization Methods and Software
Full work available at URL: https://arxiv.org/abs/2105.11845
Cites Work
- Convergence of the projected gradient method for quasiconvex multiobjective optimization
- Multicriteria optimization with a multiobjective golden section line search
- Lectures on convex optimization
- On the convergence of steepest descent methods for multiobjective optimization
- Linear and nonlinear programming
- Steepest descent methods for multicriteria optimization
- A projected gradient method for vector optimization problems
- Support-vector networks
- Accelerated diagonal steepest descent method for unconstrained multiobjective optimization
- An efficient descent method for locally Lipschitz multiobjective optimization problems
- Quasi-Newton's method for multiobjective optimization
- Proximal gradient methods for multiobjective optimization and their applications
- On the Complexity of Steepest Descent, Newton's and Regularized Newton's Methods for Nonconvex Unconstrained Optimization Problems
- Newton's Method for Multiobjective Optimization
- A Wolfe Line Search Algorithm for Vector Optimization
- An Enhancement of the Bisection Method Average Performance Preserving Minmax Optimality
- Complexity of gradient descent for multiobjective optimization