ADAPTIVE ALGORITHMS FOR NEURAL NETWORK SUPERVISED LEARNING: A DETERMINISTIC OPTIMIZATION APPROACH
DOI: 10.1142/S0218127406015805
zbMATH: 1156.92004
OpenAlex: W2099997745
MaRDI QID: Q3598837
George D. Magoulas, Michael N. Vrahatis
Publication date: 3 February 2009
Published in: International Journal of Bifurcation and Chaos
Full work available at URL: https://doi.org/10.1142/s0218127406015805
Keywords: unconstrained optimization; adaptive learning; feedforward neural networks; Jacobi processes; supervised training; backpropagation training
MSC classification: Applications of mathematical programming (90C90); Nonlinear programming (90C30); Learning and adaptive systems in artificial intelligence (68T05); Neural networks for/in biological studies, artificial life and related topics (92B20)
Cites Work
- Parallel evolutionary training algorithms for "hardware-friendly" neural networks
- Cauchy's method of minimization
- Globally convergent modification of the quickprop method
- From linear to nonlinear iterative methods
- Adaptive stepsize algorithms for on-line training of neural networks
- A new unconstrained optimization method for imprecise function and gradient values
- A class of gradient unconstrained minimization algorithms with adaptive stepsize
- Minimization of functions having Lipschitz continuous first partial derivatives
- Algorithm 666: CHABIS: a mathematical software package for locating and evaluating roots of systems of nonlinear equations
- Solving systems of nonlinear equations using the nonzero value of the topological degree
- Quasi-Newton Methods, Motivation and Theory
- On Langevin Updating in Multilayer Perceptrons
- Geometry of learning: Visualizing the performance of neural network supervised training methods
- Ill-Conditioning in Neural Network Training Problems
- Iterative Solution Methods
- Iterative Solution of Nonlinear Equations in Several Variables
- Convergence Conditions for Ascent Methods
- Convergence Conditions for Ascent Methods. II: Some Corrections
- Rates of Convergence for a Class of Iterative Procedures
- An application of the method of steepest descents to the solution of systems of non-linear simultaneous equations
- Matrix Iterative Analysis