A No-Go Theorem for One-Layer Feedforward Networks
Publication: 5383800
DOI: 10.1162/NECO_a_00657
zbMath: 1416.92012
arXiv: 1310.3796
OpenAlex: W1977368668
Wikidata: Q48589737 (Scholia: Q48589737)
MaRDI QID: Q5383800
Publication date: 20 June 2019
Published in: Neural Computation
Full work available at URL: https://arxiv.org/abs/1310.3796
MSC classifications: Neural networks for/in biological studies, artificial life and related topics (92B20); Combinatorial codes (94B25)
Related Items (16)
- Nondegenerate Neural Codes and Obstructions to Closed-Convexity
- Non-monotonicity of closed convexity in neural codes
- Signless normalized Laplacian for hypergraphs
- What can topology tell us about the neural code?
- Neural codes with three maximal codewords: convexity and minimal embedding dimension
- Minimal embedding dimensions of connected neural codes
- Error-correcting codes and neural networks
- Algebraic signatures of convex and non-convex codes
- Wheels: a new criterion for non-convexity of neural codes
- Neural Ring Homomorphisms and Maps Between Neural Codes
- Hyperplane Neural Codes and the Polar Complex
- On open and closed convex codes
- Neural Codes, Decidability, and a New Local Obstruction to Convexity
- Obstructions to convexity in neural codes
- What Makes a Neural Code Convex?
- Morphisms of Neural Codes
Cites Work
- The neural ring: an algebraic tool for analyzing the intrinsic structure of neural codes
- Nerves of simplicial complexes
- Intersection Patterns of Convex Sets via Simplicial Complexes: A Survey
- On Dedekind's Problem: The Number of Isotone Boolean Functions. II
- Approximation by superpositions of a sigmoidal function