Empirically Measuring Concentration: Fundamental Limits on Intrinsic Robustness


arXiv: 1905.12202
MaRDI QID: Q6319555

Author name not available

Publication date: 28 May 2019

Abstract: Many recent works have shown that adversarial examples that fool classifiers can be found by minimally perturbing a normal input. Recent theoretical results, starting with Gilmer et al. (2018b), show that if the inputs are drawn from a concentrated metric probability space, then adversarial examples with small perturbation are inevitable. A concentrated space has the property that any subset with Ω(1) (e.g., 1/100) measure, according to the imposed distribution, has small distance to almost all (e.g., 99/100) of the points in the space. It is not clear, however, whether these theoretical results apply to actual distributions such as images. This paper presents a method for empirically measuring and bounding the concentration of a concrete dataset, which is proven to converge to the actual concentration. We use it to empirically estimate the intrinsic robustness to ℓ∞ and ℓ2 perturbations of several image classification benchmarks. Code for our experiments is available at https://github.com/xiaozhanguva/Measure-Concentration.
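
To illustrate the quantity the abstract refers to (this is a minimal sketch, not the paper's estimation algorithm), the following Python snippet computes, for a finite sample, the fraction of points lying within distance ε of a chosen subset S under a given norm, i.e. the empirical measure of the ε-expansion of S. The function name, the synthetic uniform data, and the choice of subset are all illustrative assumptions.

import numpy as np

def expansion_measure(points, subset_mask, eps, ord=np.inf):
    # Empirical measure of the eps-expansion of S under the given norm:
    # the fraction of sample points whose distance to the nearest point
    # of S is at most eps.  Brute-force O(n * |S|) illustration only.
    S = points[subset_mask]
    dists = np.array([np.min(np.linalg.norm(S - x, ord=ord, axis=1)) for x in points])
    return float(np.mean(dists <= eps))

# Toy usage on synthetic data (hypothetical, not one of the paper's benchmarks)
rng = np.random.default_rng(0)
points = rng.random((2000, 16))   # 2000 samples drawn uniformly from [0, 1]^16
mask = np.zeros(len(points), dtype=bool)
mask[:20] = True                  # an arbitrary subset of empirical measure ~0.01
print(expansion_measure(points, mask, eps=0.3))   # measure of the 0.3-expansion (l_inf)

Measuring concentration amounts to finding, among all subsets of at least a given measure, the one whose ε-expansion is smallest; the sketch above only evaluates the expansion for a single fixed subset.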

Has companion code repository: https://github.com/xiaozhanguva/Measure-Concentration

