Zeroth-Order Topological Insights into Iterative Magnitude Pruning
Publication: 6401992
arXiv: 2206.06563 · MaRDI QID: Q6401992
Aishwarya Balwani, Jakob Krzyston
Publication date: 13 June 2022
Abstract: Modern-day neural networks are famously large, yet also highly redundant and compressible; numerous pruning strategies in the deep learning literature yield sub-networks of fully-trained, dense architectures that are over 90% sparser while still maintaining the original accuracies. Amongst these many methods, though, Iterative Magnitude Pruning (IMP) dominates in practice thanks to its conceptual simplicity, ease of implementation, and efficacy, and is the de facto baseline to beat in the pruning community. However, theoretical explanations as to why a method as simple as IMP works at all are few and limited. In this work, we leverage the notion of persistent homology to gain insights into the workings of IMP and show that it inherently encourages retention of those weights which preserve topological information in a trained network. Subsequently, we also provide bounds on how much different networks can be pruned while perfectly preserving their zeroth-order topological features, and present a modified version of IMP to do the same.
Has companion code repository: https://github.com/AishwaryaHB/aishwaryahb.github.io
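For context, standard IMP (the baseline the abstract refers to, not the authors' topology-preserving variant) repeatedly removes the smallest-magnitude surviving weights and retrains the remainder. A minimal sketch, assuming NumPy arrays and a caller-supplied `retrain` callback (a placeholder, not from the paper):

```python
import numpy as np

def imp_step(weights, mask, prune_frac):
    """One pruning round: zero out the smallest-magnitude surviving
    weights so that `prune_frac` of the survivors are removed."""
    surviving = np.abs(weights[mask])
    k = int(prune_frac * surviving.size)
    if k == 0:
        return mask
    # k-th smallest surviving magnitude becomes the cut-off
    threshold = np.partition(surviving, k - 1)[k - 1]
    return mask & (np.abs(weights) > threshold)

def iterative_magnitude_pruning(weights, rounds=3, prune_frac=0.2, retrain=None):
    """IMP loop: prune a fraction of surviving weights each round,
    then (optionally) fine-tune the survivors via `retrain`."""
    mask = np.ones_like(weights, dtype=bool)
    for _ in range(rounds):
        mask = imp_step(weights, mask, prune_frac)
        if retrain is not None:
            # `retrain` is a hypothetical hook that updates only unmasked weights
            weights = retrain(weights, mask)
    return weights * mask, mask
```

With 20% pruned per round, `r` rounds leave roughly `0.8**r` of the weights, which is how IMP reaches >90% sparsity over many iterations.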