Discrete-time Zhang neural networks for time-varying nonlinear optimization
Publication: 2296503
DOI: 10.1155/2019/4745759
zbMath: 1453.90175
OpenAlex: W2937785389
MaRDI QID: Q2296503
Min Sun, Maoying Tian, Y. J. Wang
Publication date: 18 February 2020
Published in: Discrete Dynamics in Nature and Society
Full work available at URL: https://doi.org/10.1155/2019/4745759
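The record's topic is discrete-time Zhang neural network (DTZNN) models for time-varying nonlinear optimization. As an illustrative aid only, the sketch below implements a generic one-step Euler-type DTZNN iteration for tracking the minimizer of a time-varying objective; it is not necessarily the specific multi-step scheme proposed in the indexed paper, and the objective f(x, t), matrix A, forcing term b(t), sampling gap tau, and gain h are all assumptions chosen for the example.

```python
# Minimal sketch of a one-step (Euler-type) discrete-time Zhang neural network
# (DTZNN) for time-varying minimization, for illustration only; it is not the
# specific scheme of the indexed paper. The objective, tau, and h are assumed.
import numpy as np

# Time-varying quadratic objective: f(x, t) = 0.5 * x^T A x + b(t)^T x,
# whose exact minimizer is x*(t) = -A^{-1} b(t).
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])

def b(t):
    return np.array([np.sin(t), np.cos(t)])

def grad(x, t):          # spatial gradient g(x, t) = A x + b(t)
    return A @ x + b(t)

def grad_t(x, t):        # time derivative of the gradient, dg/dt = b'(t)
    return np.array([np.cos(t), -np.sin(t)])

def hessian(x, t):       # Hessian of f with respect to x
    return A

tau = 0.01               # sampling gap (assumed)
h = 0.1                  # step gain h = tau * gamma (assumed)
x = np.zeros(2)          # initial state

# Euler-type DTZNN iteration:
#   x_{k+1} = x_k - H_k^{-1} (h * g_k + tau * dg_k/dt),
# obtained by discretizing the continuous ZNN design d/dt g(x(t), t) = -gamma * g.
for k in range(2000):
    t = k * tau
    x = x - np.linalg.solve(hessian(x, t), h * grad(x, t) + tau * grad_t(x, t))

t_end = 2000 * tau
print("DTZNN state:     ", x)
print("exact minimizer: ", -np.linalg.solve(A, b(t_end)))
```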
Related Items
- Noise-tolerant continuous-time Zhang neural networks for time-varying Sylvester tensor equations
- Signal recovery with convex constrained nonlinear monotone equations through conjugate gradient hybrid approach
- On the Hybridization of the Double Step Length Method for Solving System of Nonlinear Equations
- Modification of the double direction approach for solving systems of nonlinear equations with application to Chandrasekhar’s Integral equation
- A class of derivative-free CG projection methods for nonsmooth equations with an application to the LASSO problem
- General six-step discrete-time Zhang neural network for time-varying tensor absolute value equations
- On solving double direction methods for convex constrained monotone nonlinear equations with image restoration
- General five-step discrete-time Zhang neural network for time-varying nonlinear optimization
- A novel noise-tolerant Zhang neural network for time-varying Lyapunov equation
- From Penrose equations to Zhang neural network, Getz-Marsden dynamic system, and DDD (direct derivative dynamics) using substitution technique
Cites Work
- Unnamed Item
- New hybrid conjugate gradient projection method for the convex constrained equations
- A recurrent neural network for solving a class of generalized convex optimization problems
- A modified Hestenes-Stiefel projection method for constrained nonlinear equations and its linear convergence rate
- Design and analysis of two discrete-time ZD algorithms for time-varying nonlinear minimization
- General four-step discrete-time zeroing and derivative dynamics applied to time-varying nonlinear optimization
- ZFD formula \(4\mathrm{I}g\mathrm{SFD}_{\mathrm{Y}}\) applied to future minimization
- A new descent memory gradient method and its global convergence
- Neural network for nonsmooth pseudoconvex optimization with general convex constraints
- Generalized Peaceman-Rachford splitting method for multiple-block separable convex programming with applications to robust PCA
- Taylor-type 1-step-ahead numerical differentiation rule for first-order derivative approximation and ZNN discretization
- Updating Quasi-Newton Matrices with Limited Storage
- Numerical Methods for Ordinary Differential Equations
- SOR-like methods for augmented systems