Minimization algorithms based on supervisor and searcher cooperation
From MaRDI portal
Publication: 5956435
DOI: 10.1023/A:1011986402461 · zbMath: 1032.90020 · MaRDI QID: Q5956435
Publication date: 2001
Published in: Journal of Optimization Theory and Applications
Related Items (13)
- Stable equilibrium configuration of two bar truss by an efficient nonmonotone global Barzilai-Borwein gradient method in a fuzzy environment
- Novel algorithms for noisy minimization problems with applications to neural networks training
- LMBOPT: a limited memory method for bound-constrained optimization
- Adaptive finite element methods for the identification of elastic constants
- The Uzawa-MBB type algorithm for nonsymmetric saddle point problems
- Batch gradient method with smoothing \(L_{1/2}\) regularization for training of feedforward neural networks
- Prediction-correction method with BB step sizes
- Fast methods for computing centroidal Laguerre tessellations for prescribed volume fractions with applications to microstructure generation of polycrystalline materials
- A new spectral method for \(l_1\)-regularized minimization
- On the asymptotic behaviour of some new gradient methods
- Alternate step gradient method
- A descent algorithm without line search for unconstrained optimization
- Inertial projection and contraction algorithms with larger step sizes for solving quasimonotone variational inequalities
Uses Software
Cites Work
- A stochastic quasigradient algorithm with variable metric
- Stochastic approximation methods for constrained and unconstrained systems
- Optimization algorithm with probabilistic estimation
- Recent progress in unconstrained nonlinear optimization without derivatives
- Optimization via simulation: A review
- A method of trust region type for minimizing noisy functions
- R-linear convergence of the Barzilai and Borwein gradient method
- The Barzilai and Borwein Gradient Method for the Large Scale Unconstrained Minimization Problem
- Two-Point Step Size Gradient Methods
- Testing Unconstrained Optimization Software
- Nelder-Mead Simplex Modifications for Simulation Optimization
- A Nonmonotone Line Search Technique for Newton’s Method
- On the Barzilai and Borwein choice of steplength for the gradient method
- Stochastic Estimation of the Maximum of a Regression Function
- A Stochastic Approximation Method
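Several of the works cited above (Barzilai and Borwein's two-point step size, its R-linear convergence analysis, and nonmonotone variants) concern the same gradient scheme. As a hedged illustration only — this sketches the classical Barzilai-Borwein (BB1) step size on a convex quadratic, not the supervisor-searcher cooperation scheme of the paper itself; the function and parameter names are my own:

```python
import numpy as np

def bb_gradient_descent(grad, x0, n_iter=100, alpha0=1e-3):
    """Gradient descent with the Barzilai-Borwein (BB1) step size
    alpha_k = (s^T s) / (s^T y), where s = x_k - x_{k-1} and
    y = g_k - g_{k-1} (a two-point secant approximation of the curvature)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    alpha = alpha0                      # the very first step uses a fixed guess
    for _ in range(n_iter):
        x_new = x - alpha * g           # plain gradient step, no line search
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        # Fall back to the initial step if the curvature estimate degenerates.
        alpha = (s @ s) / sy if sy > 1e-12 else alpha0
        x, g = x_new, g_new
    return x

# Convex quadratic f(x) = 0.5 x^T A x - b^T x; the minimizer solves A x = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = bb_gradient_descent(lambda x: A @ x - b, x0=[0.0, 0.0])
```

Note that the iteration is deliberately nonmonotone — the objective value can increase between steps — which is why several of the cited items pair BB steps with nonmonotone line-search safeguards.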