Resonator Networks, 2: Factorization Performance and Capacity Compared to Optimization-Based Methods
DOI: 10.1162/neco_a_01329 · zbMath: 1497.68180 · OpenAlex: W3093755763 · Wikidata: Q100745205 · Scholia: Q100745205 · MaRDI QID: Q3386436
Bruno A. Olshausen, E. Paxon Frady, Spencer J. Kent, Friedrich T. Sommer
Publication date: 4 January 2021
Published in: Neural Computation
Full work available at URL: https://doi.org/10.1162/neco_a_01329
MSC classification:
- Artificial neural networks and deep learning (68T07)
- Learning and adaptive systems in artificial intelligence (68T05)
- Neural networks for/in biological studies, artificial life and related topics (92B20)
- Networks and circuits as models of computation; circuit complexity (68Q06)
Related Items (1)
Cites Work
- Tensor Decompositions and Applications
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Fast projection onto the simplex and the \(l_1\) ball
- Linear convergence of iterative soft-thresholding
- "Neural" computation of decisions in optimization problems
- CANDELINC: A general approach to multidimensional analysis of many-way arrays with linear constraints on parameters
- Three-way arrays: rank and uniqueness of trilinear decompositions, with application to arithmetic complexity and statistics
- Analysis of individual differences in multidimensional scaling via an \(n\)-way generalization of "Eckart-Young" decomposition
- Tensor decompositions for learning latent variable models
- Neural Network Design for Efficient Information Retrieval
- Absolute stability of global pattern formation and parallel memory storage by competitive neural networks
- Resonator Networks, 1: An Efficient Solution for Factoring High-Dimensional, Distributed Representations of Data Structures
- Learning to Represent Spatial Transformations with Factored Higher-Order Boltzmann Machines
- Introduction to Stochastic Search and Optimization
- A Multilinear Singular Value Decomposition
- On the Best Rank-1 and Rank-(R1, R2, ..., RN) Approximation of Higher-Order Tensors
- Tensor Decomposition for Signal Processing and Machine Learning
- An iterative thresholding algorithm for linear inverse problems with a sparsity constraint
- Validation of subgradient optimization
- Neural networks and physical systems with emergent collective computational abilities.
- Neurons with graded response have collective computational properties like those of two-state neurons.
- Most Tensor Problems Are NP-Hard