Stronger data poisoning attacks break data sanitization defenses
From MaRDI portal
Publication:2127214
DOI: 10.1007/s10994-021-06119-y
OpenAlex: W3217417806
MaRDI QID: Q2127214
Jacob Steinhardt, Pang Wei Koh, Percy Liang
Publication date: 20 April 2022
Published in: Machine Learning
Full work available at URL: https://arxiv.org/abs/1811.00741
Related Items (2)
- Random Projection and Recovery for High Dimensional Optimization with Arbitrary Outliers
- Unnamed Item
Uses Software
Cites Work
- Notes about the Carathéodory number
- Some properties of the bilevel programming problem
- Practical bilevel optimization. Algorithms and applications
- The security of machine learning
- A survey of outlier detection methodologies
- Learning in the Presence of Malicious Errors
- The Power of Localization for Efficiently Learning Linear Separators with Noise
- On Agnostic Learning of Parities, Monomials, and Halfspaces
- Hardness of Learning Halfspaces with Noise
- Robust Estimators in High-Dimensions Without the Computational Intractability
- Learning from untrusted data
- Resilience: A Criterion for Learning in the Presence of Arbitrary Outliers
- Learning geometric concepts with nasty noise
- On the learnability and design of output codes for multiclass problems
This page was built for publication: Stronger data poisoning attacks break data sanitization defenses