Robust Adversarial Learning via Sparsifying Front Ends

arXiv: 1810.10625
MaRDI QID: Q6308739

Author name not available

Publication date: 24 October 2018

Abstract: It is by now well known that small adversarial perturbations can induce classification errors in deep neural networks. In this paper, we take a bottom-up, signal-processing perspective on this problem and show that systematic exploitation of sparsity in natural data is a promising tool for defense. For linear classifiers, we show that a sparsifying front end is provably effective against $\ell_\infty$-bounded attacks, reducing output distortion due to the attack by a factor of roughly K/N, where N is the data dimension and K is the sparsity level. We then extend this concept to deep networks, showing that a "locally linear" model can be used to develop a theoretical foundation for crafting attacks and defenses. We also devise attacks based on the locally linear model that outperform the well-known FGSM attack. We supplement our theoretical results with experiments on the MNIST and CIFAR-10 datasets, showing the efficacy of the proposed sparsity-based defense schemes.
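To make the linear-classifier claim concrete, below is a minimal sketch of a sparsifying front end, assuming a fixed orthonormal DCT basis, a toy exactly K-sparse signal, and a simple top-K coefficient projection. The function name `sparsify_front_end` and all parameter choices are illustrative assumptions and are not taken from the companion repository.

```python
import numpy as np
from scipy.fft import dct, idct


def sparsify_front_end(x, K):
    """Keep only the K largest-magnitude DCT coefficients of x (top-K projection)."""
    coeffs = dct(x, norm="ortho")              # analysis: signal -> orthonormal DCT coefficients
    keep = np.argsort(np.abs(coeffs))[-K:]     # indices of the K largest coefficients
    mask = np.zeros_like(coeffs)
    mask[keep] = 1.0
    return idct(coeffs * mask, norm="ortho")   # synthesis: back to the signal domain


rng = np.random.default_rng(0)
N, K, eps = 256, 16, 0.1                       # data dimension, sparsity level, ell_infty budget

# Toy signal that is exactly K-sparse in the DCT basis (so the front end passes the
# clean signal through unchanged), and a dense linear classifier score f(x) = w @ x.
x = idct(np.concatenate([rng.standard_normal(K), np.zeros(N - K)]), norm="ortho")
w = rng.standard_normal(N)

# Worst-case ell_infty perturbation against the *undefended* linear score.
e = eps * np.sign(w)

undefended = abs(w @ (x + e) - w @ x)          # equals eps * ||w||_1, grows with N
defended = abs(w @ sparsify_front_end(x + e, K) - w @ x)

print(f"undefended distortion: {undefended:.3f}")
print(f"defended distortion:   {defended:.3f}")  # much smaller: the attack only acts through the K retained coefficients
```

The intuition matches the abstract: without the front end the attacker exploits all N coordinates (distortion eps * ||w||_1), while the projection confines its effect to the K retained basis directions. Note that the paper's K/N-type bound concerns the worst-case attack on the defended system, which the FGSM-style perturbation above only approximates.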




Has companion code repository: https://github.com/soorya19/sparsity-based-defenses