Competitive Layer Model of Discrete-Time Recurrent Neural Networks with LT Neurons
From MaRDI portal
Publication:3583499
DOI: 10.1162/NECO_A_00004-ZHOU
zbMath: 1195.68085
DBLP: journals/neco/ZhouZ10
OpenAlex: W1994173832
Wikidata: Q51700555 (Scholia: Q51700555)
MaRDI QID: Q3583499
Publication date: 17 August 2010
Published in: Neural Computation
Full work available at URL: https://doi.org/10.1162/neco_a_00004-zhou
- Learning and adaptive systems in artificial intelligence (68T05)
- Computing methodologies for image processing (68U10)
- Pattern recognition, speech recognition (68T10)
Related Items (2)
- A competitive layer model for cellular neural networks
- Batch gradient method with smoothing \(L_{1/2}\) regularization for training of feedforward neural networks
Cites Work
- A Competitive-Layer Model for Feature Binding and Sensory Segmentation
- Lyapunov Functions for Neural Nets with Nondifferentiable Input-Output Characteristics
- Matrix Analysis
- Selectively Grouping Neurons in Recurrent Networks of Lateral Inhibition
- Sufficient and necessary conditions for global exponential stability of discrete-time recurrent neural networks
- Permitted and Forbidden Sets in Symmetric Threshold-Linear Networks
- Multistability Analysis for Recurrent Neural Networks with Unsaturating Piecewise Linear Transfer Functions
- Activity Invariant Sets and Exponentially Stable Attractors of Linear Threshold Discrete-Time Recurrent Neural Networks
- Global stability of a class of discrete-time recurrent neural networks
- Analysis of Cyclic Dynamics for Networks of Linear Threshold Neurons