Convergence analysis for sigma-pi-sigma neural network based on some relaxed conditions
Publication: 6149503
DOI: 10.1016/j.ins.2021.11.044
OpenAlex: W3217439888
MaRDI QID: Q6149503
Qian Kang, Jacek M. Zurada, Qinwei Fan
Publication date: 5 February 2024
Published in: Information Sciences
Full work available at URL: https://doi.org/10.1016/j.ins.2021.11.044
Analysis of algorithms (68W40)
Artificial neural networks and deep learning (68T07)
Methods of reduced gradient type (90C52)
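For orientation, a sigma-pi-sigma network composes a summation (sigma) layer, a product (pi) layer, and an output summation layer. The following minimal Python sketch illustrates one forward pass under that general definition; all names, shapes, and the tanh activation are illustrative assumptions, not details taken from the paper itself.

```python
import math

def sps_forward(x, W1, pi_groups, w2):
    """One forward pass of a sigma-pi-sigma network (illustrative sketch;
    names and the tanh activation are assumptions, not from the paper).

    x         -- input vector (list of floats)
    W1        -- first sigma-layer weight matrix (list of rows)
    pi_groups -- index tuples; each pi unit multiplies the listed
                 sigma-layer outputs together
    w2        -- output sigma-layer weights, one per pi unit
    """
    # first sigma layer: weighted sums passed through tanh
    s = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    # pi layer: product of the selected sigma-layer outputs
    p = [math.prod(s[i] for i in g) for g in pi_groups]
    # final sigma layer: weighted sum of the pi-unit outputs
    return sum(w * pi for w, pi in zip(w2, p))
```

The gradient analyses cited below concern training such weights (W1, w2) by batch or online gradient descent, possibly with momentum or regularization terms.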
Cites Work
- Batch gradient method with smoothing \(L_{1/2}\) regularization for training of feedforward neural networks
- Convergence analysis of online gradient method for BP neural networks
- An online gradient method with momentum for two-layer feedforward neural networks
- Training multilayer perceptrons via minimization of sum of ridge functions
- Multilayer feedforward networks are universal approximators
- Dynamic properties and a new learning mechanism in higher order neural networks
- Deterministic convergence analysis via smoothing group Lasso regularization and adaptive momentum for Sigma-Pi-Sigma neural network
- Gradient convergence in gradient methods with errors