scientific article; zbMATH DE number 7626756
Publication:5053261
Dan Alistarh, Nikoli Dryden, Tal Ben-Nun, Torsten Hoefler, Alexandra Peste
Publication date: 6 December 2022
Full work available at URL: https://arxiv.org/abs/2102.00554
Title: Sparsity in deep learning: pruning and growth for efficient inference and training in neural networks
Related Items
- Sparse Deep Neural Network for Nonlinear Partial Differential Equations
- Neural network training using \(\ell_1\)-regularization and bi-fidelity data
- On the landscape of one-hidden-layer sparse networks and beyond
- Getting away with more network pruning: from sparsity to geometry and linear regions
- A brain-inspired algorithm for training highly sparse neural networks
- Approximation of functions from Korobov spaces by deep convolutional neural networks
Uses Software
Cites Work
- Sparse inverse covariance estimation with the graphical lasso
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- GitHub
- A topological insight into restricted Boltzmann machines
- The dropout learning algorithm
- A note on the complexity of \(L_p\) minimization
- A tutorial on Fisher information
- Pruning of basis functions in nonlinear approximators
- A simple neural network pruning algorithm with application to filter synthesis
- Quick and robust feature selection: the strength of energy-efficient sparse training for autoencoders
- Blended coarse gradient descent for full quantization of deep neural networks
- Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization
- Variable selection with neural networks
- Deep vs. shallow networks: An approximation theory perspective
- Stream Semantic Registers: A Lightweight RISC-V ISA Extension Achieving Full Compute Utilization in Single-Issue Cores
- 10.1162/15324430152748236
- Layer-Wise Adaptive Gradient Sparsification for Distributed Deep Learning with Convergence Guarantees
- Sensitivity-Informed Provable Pruning of Neural Networks
- Predictive Coding Approximates Backprop Along Arbitrary Computation Graphs
- Pre-defined Sparsity for Low-Complexity Convolutional Neural Networks
- NeST: A Neural Network Synthesis Tool Based on a Grow-and-Prune Paradigm
- SqueezeFlow: A Sparse CNN Accelerator Exploiting Concise Convolution Rules
- An Introduction to Variational Autoencoders
- Model Selection and Estimation in Regression with Grouped Variables