Parallel quasi-Newton methods for unconstrained optimization (Q1116897)
From MaRDI portal
scientific article; zbMATH DE number 4089342
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Parallel quasi-Newton methods for unconstrained optimization | scientific article; zbMATH DE number 4089342 | |
Statements
Parallel quasi-Newton methods for unconstrained optimization (English)
0 references
1988
0 references
We discuss methods for solving the unconstrained optimization problem on parallel computers, when the number of variables is sufficiently small that quasi-Newton methods can be used. We concentrate mainly, but not exclusively, on problems where function evaluation is expensive. First we discuss ways to parallelize both the function evaluation costs and the linear algebra calculations in the standard sequential secant method, the BFGS method. Then we discuss new methods that are appropriate when there are enough processors to evaluate the function, gradient, and part but not all of the Hessian at each iteration. We develop new algorithms that utilize this information and analyze their convergence properties. We present computational experiments showing that they are superior to parallelizing either the BFGS method or Newton's method under our assumptions on the number of processors and the cost of function evaluation. Finally, we discuss ways to effectively utilize the gradient values at unsuccessful trial points that are available in our parallel methods and also in some sequential software packages.
0 references
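The abstract's starting point is the standard sequential BFGS method, whose inverse-Hessian secant update the paper then parallelizes. As background, a minimal sequential BFGS sketch in pure Python is given below; the test function, step rule, and iteration count are illustrative assumptions, not taken from the paper:

```python
# Minimal sequential BFGS sketch (illustrative; not the paper's parallel variant).
# Minimizes f(x) = x0^2 + 2*x1^2, whose gradient is g(x) = (2*x0, 4*x1),
# maintaining an approximation H to the inverse Hessian via the BFGS update
#   H+ = (I - rho*s*y^T) H (I - rho*y*s^T) + rho*s*s^T,  rho = 1/(y^T s).

def f(x):
    return x[0] ** 2 + 2.0 * x[1] ** 2

def grad(x):
    return [2.0 * x[0], 4.0 * x[1]]

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

def bfgs_minimize(x0, iters=50):
    H = [[1.0, 0.0], [0.0, 1.0]]  # initial inverse-Hessian approximation
    x = list(x0)
    g = grad(x)
    for _ in range(iters):
        if dot(g, g) < 1e-18:
            break
        # Search direction d = -H g.
        d = [-(H[0][0] * g[0] + H[0][1] * g[1]),
             -(H[1][0] * g[0] + H[1][1] * g[1])]
        # Backtracking (Armijo) line search.
        alpha, fx = 1.0, f(x)
        while f([x[0] + alpha * d[0], x[1] + alpha * d[1]]) > fx + 1e-4 * alpha * dot(g, d):
            alpha *= 0.5
        x_new = [x[0] + alpha * d[0], x[1] + alpha * d[1]]
        g_new = grad(x_new)
        s = [x_new[0] - x[0], x_new[1] - x[1]]
        y = [g_new[0] - g[0], g_new[1] - g[1]]
        ys = dot(y, s)
        if ys > 1e-12:  # curvature condition; skip update otherwise
            rho = 1.0 / ys
            # A = I - rho * s * y^T
            A = [[1.0 - rho * s[0] * y[0], -rho * s[0] * y[1]],
                 [-rho * s[1] * y[0], 1.0 - rho * s[1] * y[1]]]
            AH = [[A[0][0] * H[0][0] + A[0][1] * H[1][0],
                   A[0][0] * H[0][1] + A[0][1] * H[1][1]],
                  [A[1][0] * H[0][0] + A[1][1] * H[1][0],
                   A[1][0] * H[0][1] + A[1][1] * H[1][1]]]
            At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]  # A^T = I - rho*y*s^T
            H = [[AH[0][0] * At[0][0] + AH[0][1] * At[1][0] + rho * s[0] * s[0],
                  AH[0][0] * At[0][1] + AH[0][1] * At[1][1] + rho * s[0] * s[1]],
                 [AH[1][0] * At[0][0] + AH[1][1] * At[1][0] + rho * s[1] * s[0],
                  AH[1][0] * At[0][1] + AH[1][1] * At[1][1] + rho * s[1] * s[1]]]
        x, g = x_new, g_new
    return x

x_star = bfgs_minimize([3.0, -2.0])
```

In the setting the paper targets, the function and gradient evaluations inside this loop (and the matrix-vector products) are the costs one would distribute across processors.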
unconstrained optimization
0 references
parallel computers
0 references
quasi-Newton methods
0 references
sequential secant method
0 references
BFGS method
0 references
computational experiments
0 references