A zeroing neural dynamics based acceleration optimization approach for optimizers in deep neural networks
Publication: 6072593
DOI: 10.1016/j.neunet.2022.03.010
OpenAlex: W4220705768
MaRDI QID: Q6072593
Shubin Li, Haoen Huang, Shan Liao, Jiayong Liu, Xiuchun Xiao
Publication date: 13 October 2023
Published in: Neural Networks
Full work available at URL: https://doi.org/10.1016/j.neunet.2022.03.010
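The title refers to zeroing neural dynamics (ZND, also called Zhang neural networks in several of the cited works below). The standard ZND recipe defines an error function e(t) and imposes the dynamics ė(t) = -γΦ(e(t)) with gain γ > 0 and a monotone activation Φ, so that e(t) is driven to zero. The following is a minimal NumPy sketch of that generic recipe on a small quadratic problem; the gain, the tanh activation, and the forward-Euler discretization are illustrative assumptions, not the acceleration scheme proposed in the paper this record describes.

```python
# Minimal sketch of the zeroing neural dynamics (ZND) design formula,
# illustrated on a static quadratic objective f(x) = 0.5 x^T A x - b^T x.
# The gain gamma, the tanh activation, and the Euler step are illustrative
# choices, NOT the acceleration scheme of the paper this record describes.
import numpy as np

def znd_minimize(A, b, x0, gamma=10.0, dt=1e-2, steps=500):
    """Drive the error e(t) = grad f(x) = A x - b to zero via the
    ZND law e_dot = -gamma * phi(e), here with phi = tanh.
    For this quadratic, e_dot = A x_dot, hence x_dot = A^{-1} e_dot."""
    x = x0.astype(float).copy()
    A_inv = np.linalg.inv(A)          # acceptable for a small demo problem
    for _ in range(steps):
        e = A @ x - b                 # error function to be zeroed
        e_dot = -gamma * np.tanh(e)   # ZND design formula
        x += dt * (A_inv @ e_dot)     # forward-Euler discretization
    return x

if __name__ == "__main__":
    A = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
    b = np.array([1.0, 1.0])
    x = znd_minimize(A, b, x0=np.zeros(2))
    print("ZND solution:  ", x)
    print("exact solution:", np.linalg.solve(A, b))
```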
Related Items (1)
Cites Work
- Discrete-time noise-tolerant Zhang neural network for dynamic matrix pseudoinversion
- Hybrid tensor decomposition in neural network compression
- Analytical convergence regions of accelerated gradient descent in nonconvex optimization under regularity condition
- Robust PD-type iterative learning control for discrete systems with multiple time-delays subjected to polytopic uncertainty and restricted frequency-domain
- A parallel computing method based on zeroing neural networks for time-varying complex-valued matrix Moore-Penrose inversion
- Noise-Tolerant ZNN Models for Solving Time-Varying Zero-Finding Problems: A Control-Theoretic Approach
- Analysis and Design of Optimization Algorithms via Integral Quadratic Constraints
- Analysis of Optimization Algorithms via Integral Quadratic Constraints: Nonstrongly Convex Problems
- NeST: A Neural Network Synthesis Tool Based on a Grow-and-Prune Paradigm
- A Stochastic Approximation Method
- Convergence of the RMSProp deep learning method with penalty for nonconvex optimization
- A fast saddle-point dynamical system approach to robust deep learning