Lossless compression of deep neural networks
Publication: 2110432
DOI: 10.1007/978-3-030-58942-4_27
OpenAlex: W3089719735
MaRDI QID: Q2110432
Thiago Serra, Abhinav Kumar, Srikumar Ramalingam
Publication date: 21 December 2022
Full work available at URL: https://arxiv.org/abs/2001.00218
Keywords: mixed-integer linear programming; deep learning; neural network pruning; neuron stability; rectified linear unit
Classification: Combinatorial optimization (90C27); Problem solving in the context of artificial intelligence (heuristics, search strategies, etc.) (68T20); Operations research and management science (90Bxx)
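The keywords above summarize the paper's approach: a ReLU unit whose pre-activation sign is fixed over the entire input domain (stably active or stably inactive) can be removed or folded away without changing the network's output, and the paper certifies such stability with a mixed-integer linear program. As a minimal sketch, the following code, assuming NumPy, a small feedforward ReLU network, and box-constrained inputs, flags candidate stable units with interval bound propagation, a cheap relaxation of the exact MILP bounds used in the paper; the network weights and the helper name are hypothetical.

```python
import numpy as np

def relu_stability_report(weights, biases, x_lo, x_hi):
    """Propagate box bounds through a feedforward ReLU network and
    flag units whose pre-activation sign is fixed on the whole box.
    Interval arithmetic only: a relaxation of the exact MILP bounds
    computed in the paper, so it may miss some stable units but
    never falsely reports one."""
    lo = np.asarray(x_lo, dtype=float)
    hi = np.asarray(x_hi, dtype=float)
    report = []
    for W, b in zip(weights, biases):
        W_pos = np.maximum(W, 0.0)
        W_neg = np.minimum(W, 0.0)
        pre_lo = W_pos @ lo + W_neg @ hi + b  # lower bound on Wx + b
        pre_hi = W_pos @ hi + W_neg @ lo + b  # upper bound on Wx + b
        report.append({
            "stably_active": pre_lo >= 0.0,   # ReLU acts as identity
            "stably_inactive": pre_hi <= 0.0, # unit always outputs 0
        })
        # Post-ReLU bounds feed the next layer.
        lo, hi = np.maximum(pre_lo, 0.0), np.maximum(pre_hi, 0.0)
    return report

# Hypothetical 2-3-1 network, inputs boxed in [-1, 1]^2.
weights = [np.array([[1.0, -1.0], [2.0, 0.5], [0.0, 1.0]]),
           np.array([[1.0, 1.0, -1.0]])]
biases = [np.array([3.0, -4.0, 0.0]), np.array([0.0])]

for k, layer in enumerate(relu_stability_report(weights, biases, [-1, -1], [1, 1])):
    print(f"layer {k}: stably active {layer['stably_active']}, "
          f"stably inactive {layer['stably_inactive']}")
```

In the paper's lossless compression, stably inactive units are deleted outright and stably active units are folded into the following layer's affine map, leaving the network's input-output function unchanged.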
Related Items
Training thinner and deeper neural networks: jumpstart regularization
Principled deep neural network training through linear programming
Getting away with more network pruning: from sparsity to geometry and linear regions
OAMIP: optimizing ANN architectures using mixed-integer programming
Towards Lower Bounds on the Depth of ReLU Neural Networks
Optimization of sparsity-constrained neural networks as a mixed integer linear program
Deep Neural Networks Pruning via the Structured Perspective Regularization
Parameter identifiability of a deep feedforward ReLU neural network
Between steps: intermediate relaxations between big-M and convex hull formulations
This page was built for publication: Lossless compression of deep neural networks