Stable recovery of entangled weights: towards robust identification of deep neural networks from minimal samples
Publication: 2105108
DOI: 10.1016/j.acha.2022.08.002
OpenAlex: W3124853521
MaRDI QID: Q2105108
Michael Rauchensteiner, Massimo Fornasier, Timo Klock, Christian Fiedler
Publication date: 8 December 2022
Published in: Applied and Computational Harmonic Analysis
Full work available at URL: https://arxiv.org/abs/2101.07150
Keywords: frames, deparametrization, deep neural networks, active sampling, exact finite sample identifiability, nonconvex optimization on matrix spaces
Cites Work
- Learning functions of few arbitrary linear parameters in high dimensions
- Entropy and sampling numbers of classes of ridge functions
- On the complexity of loading shallow neural networks
- Approximation by ridge functions and neural networks with one hidden layer
- Identifying linear combinations of ridge functions
- Reconstructing a neural net from its output
- Provable approximation properties for deep neural networks
- DNN expression rate analysis of high-dimensional PDEs: application to option pricing
- Nonlinear approximation and (deep) ReLU networks
- Neural network identifiability for a family of sigmoidal nonlinearities
- Robust and resource-efficient identification of two hidden layer neural networks
- Optimal approximation of piecewise smooth functions using deep ReLU neural networks
- Function approximation by deep networks
- Affine symmetries and neural network identifiability
- Deep vs. shallow networks: An approximation theory perspective
- Robust principal component analysis?
- Approximation by Ridge Functions and Neural Networks
- Theoretical Insights Into the Optimization Landscape of Over-Parameterized Shallow Neural Networks
- High-Dimensional Probability
- Memory Capacity of Neural Networks with Threshold and Rectified Linear Unit Activations
- Analysis of the Generalization Error: Empirical Risk Minimization over Deep Artificial Neural Networks Overcomes the Curse of Dimensionality in the Numerical Approximation of Black-Scholes Partial Differential Equations
- Learning deep linear neural networks: Riemannian gradient flows and convergence to global minimizers
- Guaranteed Recovery of One-Hidden-Layer Neural Networks via Cross Entropy
- Robust and resource-efficient identification of shallow neural networks by fewest samples
- Understanding Machine Learning
- Perturbation bounds in connection with singular value decomposition
- Neural network approximation
- A deep network construction that adapts to intrinsic dimensionality beyond the domain