Extended Newton Methods for Multiobjective Optimization: Majorizing Function Technique and Convergence Analysis
From MaRDI portal
Publication:5234284
DOI: 10.1137/18M1191737
zbMath: 1422.90052
Wikidata: Q127255335 · Scholia: Q127255335
MaRDI QID: Q5234284
Chong Li, Carisa Kwok Wai Yu, Yao-Hua Hu, Xiao Qi Yang, Jin-Hua Wang
Publication date: 26 September 2019
Published in: SIAM Journal on Optimization
Keywords: convergence criteria; multiobjective optimization; Newton method; Pareto optimum; \(L\)-average Lipschitz condition
MSC classifications: Numerical mathematical programming methods (65K05); Multi-objective and goal programming (90C29); Nonlinear programming (90C30)
Related Items (16)
- On semilocal convergence analysis for two-step Newton method under generalized Lipschitz conditions in Banach spaces
- A nonmonotone gradient method for constrained multiobjective optimization problems
- A quasi-Newton method with Wolfe line searches for multiobjective optimization
- Augmented Lagrangian cone method for multiobjective optimization problems with an application to an optimal control problem
- Convergence of inexact steepest descent algorithm for multiobjective optimizations on Riemannian manifolds without curvature constraints
- Conditional gradient method for vector optimization
- An infeasible interior-point technique to generate the nondominated set for multiobjective optimization problems
- Memory gradient method for multiobjective optimization
- Accelerated diagonal steepest descent method for unconstrained multiobjective optimization
- An efficient descent method for locally Lipschitz multiobjective optimization problems
- A projected subgradient method for nondifferentiable quasiconvex multiobjective optimization problems
- Convergence of a nonmonotone projected gradient method for nonconvex multiobjective optimization
- Globally convergent Newton-type methods for multiobjective optimization
- Combined gradient methods for multiobjective optimization
- A superlinearly convergent nonmonotone quasi-Newton method for unconstrained multiobjective optimization
- Linear convergence of a nonmonotone projected gradient method for multiobjective optimization
Cites Work
- Newton's method for sections on Riemannian manifolds: Generalized covariant \(\alpha \)-theory
- Multiobjective optimization. Interactive and evolutionary approaches
- Optimization. Algorithms and consistent approximations
- Steepest descent methods for multicriteria optimization.
- Introductory lectures on convex optimization. A basic course.
- Kantorovich's theorem on Newton's method in Riemannian manifolds
- A Newton-like method for variable order vector optimization problems
- A steepest descent method for vector optimization
- A projected gradient method for vector optimization problems
- Vector optimization. Set-valued and variational analysis.
- Minimization of functions having Lipschitz continuous first partial derivatives
- Convergence of the steepest descent method for minimizing quasiconvex functions
- Convergence of the Gauss–Newton Method for Convex Composite Optimization under a Majorant Condition
- A Multiobjective Branch-and-Bound Framework: Application to the Biobjective Spanning Tree Problem
- Generalized proximal point algorithms for multiobjective optimization problems
- Advances in Cone-Based Preference Modeling for Decision Making with Multiple Criteria
- Vector Optimization
- Ridge regression in two-parameter solution
- Adaptive Scalarization Methods in Multiobjective Optimization
- Majorizing Functions and Convergence of the Gauss–Newton Method for Convex Composite Optimization
- Newton's Method for Multiobjective Optimization
- Simultaneous Minimization of Mean and Variation of Flow Time and Waiting Time in Single Machine Systems
- Convergence of Newton’s method and inverse function theorem in Banach space
- Numerical Optimization
- Newton's method on Riemannian manifolds: covariant alpha theory
- Generalized Lexicographic MultiObjective Combinatorial Optimization. Application to Cryptography
- Historical Development of the Newton–Raphson Method
- Convergence of Newton's method and uniqueness of the solution of equations in Banach space
- Iterative Solution of Nonlinear Equations in Several Variables
- A Derivative-Free Trust-Region Method for Biobjective Optimization
- A quadratically convergent Newton method for vector optimization
- A New Scalarization Technique and New Algorithms to Generate Pareto Fronts
- Proximal Methods in Vector Optimization
- Local convergence of Newton's method in Banach space from the viewpoint of the majorant principle
- An Efficient Interior-Point Method for Convex Multicriteria Optimization Problems
- Kantorovich's Theorem on Newton's Method for Solving Strongly Regular Generalized Equation
- Logistic regression, AdaBoost and Bregman distances