A One-Layer Recurrent Neural Network with a Discontinuous Activation Function for Linear Programming
From MaRDI portal
Publication: 5387456
DOI: 10.1162/neco.2007.03-07-488 · zbMath: 1135.68535 · OpenAlex: W2087204344 · MaRDI QID: Q5387456
Publication date: 13 May 2008
Published in: Neural Computation
Full work available at URL: https://doi.org/10.1162/neco.2007.03-07-488
Related Items (14)
- A discrete-time projection neural network for solving degenerate convex quadratic optimization
- A nonnegative matrix factorization algorithm based on a discrete-time projection neural network
- A neurodynamic approach to nonsmooth constrained pseudoconvex optimization problem
- Neural network for constrained nonsmooth optimization using Tikhonov regularization
- Cardinality-constrained portfolio selection based on collaborative neurodynamic optimization
- Sparse signal reconstruction via collaborative neurodynamic optimization
- Neurodynamics-driven portfolio optimization with targeted performance criteria
- A one-layer recurrent neural network for constrained pseudoconvex optimization and its application for dynamic portfolio optimization
- A Discrete-Time Neurodynamic Approach to Sparsity-Constrained Nonnegative Matrix Factorization
- A new neural network for solving quadratic programming problems with equality and inequality constraints
- A one-layer recurrent neural network for non-smooth convex optimization subject to linear inequality constraints
- A Novel Recurrent Neural Network with Finite-Time Convergence for Linear Programming
- A neurodynamic approach to convex optimization problems with general constraint
- A collective neurodynamic optimization approach to bound-constrained nonconvex optimization
Cites Work
- Dynamical behaviors of Cohen-Grossberg neural networks with discontinuous activation functions
- A deterministic annealing neural network for convex programming
- A dual neural network for convex quadratic programming subject to linear equality and inequality constraints
- A recurrent neural network with exponential convergence for solving convex quadratic program and related linear piecewise equations
- Generalized Lyapunov approach for convergence of neural networks with discontinuous or non-Lipschitz activations
- Analysis and design of a recurrent neural network for linear programming
- Generalized Neural Network for Nonsmooth Nonlinear Programming Problems
- New conditions for global stability of neural networks with application to linear and quadratic programming problems
- Global convergence of neural networks with discontinuous neuron activations
- Dynamical Behaviors of Delayed Neural Network Systems with Discontinuous Activation Functions
- A high performance neural network for solving nonlinear programming problems with hybrid constraints