Adversarially Robust Kernel Smoothing
Publication: 6360812
arXiv: 2102.08474
MaRDI QID: Q6360812
Author name not available
Publication date: 16 February 2021
Abstract: We propose a scalable robust learning algorithm that combines kernel smoothing and robust optimization. Our method is motivated by the convex-analysis perspective on distributionally robust optimization based on probability metrics, such as the Wasserstein distance and the maximum mean discrepancy. We adapt the kernel integral operator via supremal convolution to form a novel function majorant used to enforce robustness. Our method is simple in form and applies to general loss functions and machine learning models. Exploiting a connection with optimal transport, we prove theoretical guarantees of certified robustness under distribution shift. Furthermore, we report experiments with general machine learning models, such as deep neural networks, demonstrating performance competitive with state-of-the-art certifiably robust learning algorithms based on the Wasserstein distance.
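The Wasserstein-distance-based surrogates referenced in the abstract are supremal convolutions of the loss with a transport cost, and the kernel-smoothing majorant is built analogously from the kernel integral operator. Below is a minimal sketch, assuming a loss \ell(\theta; x), a transport cost c with c(x, x) = 0, a penalty \gamma > 0, and a kernel k normalized so that k(x, x) = 1; the kernel-smoothing formula is an assumed reading of the abstract, not the paper's verbatim definition.

% Wasserstein (Moreau-envelope style) robust surrogate: the supremal
% convolution of the loss with the negated transport cost; it majorizes
% the loss because z = x is feasible and c(x, x) = 0.
\[
  \phi_{\gamma}(\theta; x) \;=\; \sup_{z}\bigl\{\ell(\theta; z) - \gamma\, c(z, x)\bigr\}
  \;\ge\; \ell(\theta; x).
\]
% Kernel-smoothing analogue (assumed form): replacing the kernel
% integral \int \ell(\theta; z)\, k(x, z)\, dz by a supremum yields a
% majorant of the loss whenever k(x, x) = 1, again because z = x is
% feasible in the supremum.
\[
  \hat{\ell}_{k}(\theta; x) \;=\; \sup_{z}\, \ell(\theta; z)\, k(x, z)
  \;\ge\; \ell(\theta; x)\, k(x, x) \;=\; \ell(\theta; x).
\]
% Robust training then minimizes the empirical average of the majorant,
% \min_{\theta} \tfrac{1}{n} \sum_{i=1}^{n} \hat{\ell}_{k}(\theta; x_i),
% in place of the plain empirical risk.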
Has companion code repository: https://github.com/christinakouridi/arks