Making an Invisibility Cloak: Real World Adversarial Attacks on Object Detectors

From MaRDI portal
Publication: 6328309

arXiv: 1910.14667 · MaRDI QID: Q6328309

Author name not available

Publication date: 31 October 2019

Abstract: We present a systematic study of adversarial attacks on state-of-the-art object detection frameworks. Using standard detection datasets, we train patterns that suppress the objectness scores produced by a range of commonly used detectors, and ensembles of detectors. Through extensive experiments, we benchmark the effectiveness of adversarially trained patches under both white-box and black-box settings, and quantify transferability of attacks between datasets, object classes, and detector models. Finally, we present a detailed study of physical world attacks using printed posters and wearable clothes, and rigorously quantify the performance of such attacks with different metrics.
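The core optimization the abstract describes — training a patch whose pixels are adjusted by gradient descent so that the objectness scores of an ensemble of detectors are suppressed, while keeping the patch printable — can be sketched in miniature. The NumPy toy below is illustrative only and is not the paper's method or code: `objectness` is a stand-in sigmoid "detector" over flattened pixels, and the names `train_patch` and `detectors` are invented for this sketch.

```python
import numpy as np

def objectness(patch, w):
    # Toy stand-in for a detector's objectness head: the sigmoid
    # of a linear score over the flattened patch pixels.
    return 1.0 / (1.0 + np.exp(-np.dot(w, patch)))

def train_patch(detectors, dim, steps=500, lr=0.1, seed=0):
    """Gradient-descend a patch so the average objectness over an
    ensemble of (toy) detectors is suppressed, clipping pixels to
    [0, 1] so the result remains a printable image."""
    rng = np.random.default_rng(seed)
    patch = rng.uniform(0.0, 1.0, dim)
    for _ in range(steps):
        grad = np.zeros(dim)
        for w in detectors:
            s = objectness(patch, w)
            # d(sigmoid(w . patch)) / d(patch) = s * (1 - s) * w
            grad += s * (1.0 - s) * w
        patch = np.clip(patch - lr * grad / len(detectors), 0.0, 1.0)
    return patch

# Three random linear "detectors" play the role of an ensemble;
# a real attack would backpropagate through detection networks.
rng = np.random.default_rng(1)
detectors = [rng.normal(size=64) for _ in range(3)]
patch = train_patch(detectors, dim=64)
scores = [objectness(patch, w) for w in detectors]
```

The clip step is the simplest way to keep the optimized pattern in valid pixel range; the paper additionally evaluates how such patterns survive printing and physical-world capture, which this sketch does not model.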




Has companion code repository: https://github.com/anonymous1125/patnet_dataset
