Probabilistic Verification and Reachability Analysis of Neural Networks via Semidefinite Programming
Publication: 6326896
arXiv: 1910.04249
MaRDI QID: Q6326896
Author name not available
Publication date: 9 October 2019
Abstract: Quantifying the robustness of neural networks or verifying their safety properties against input uncertainties or adversarial attacks has become an important research area in learning-enabled systems. Most existing results concentrate on the worst-case scenario, in which the input of the neural network is perturbed within a norm-bounded uncertainty set. In this paper, we consider a probabilistic setting in which the uncertainty is random with known first two moments. In this context, we discuss two relevant problems: (i) probabilistic safety verification, in which the goal is to find an upper bound on the probability of violating a safety specification; and (ii) confidence ellipsoid estimation, in which, given a confidence ellipsoid for the input of the neural network, our goal is to compute a confidence ellipsoid for the output. Due to the presence of nonlinear activation functions, these two problems are very difficult to solve exactly. To simplify the analysis, our main idea is to abstract the nonlinear activation functions by a combination of affine and quadratic constraints they impose on their input-output pairs. We then show that the safety of the abstracted network, which is sufficient for the safety of the original network, can be analyzed using semidefinite programming. We illustrate the performance of our approach with numerical experiments.
Has companion code repository: https://github.com/mahyarfazlyab/DeepSDP
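The abstraction idea from the abstract can be illustrated for the ReLU activation: y = max(0, x) satisfies, for every input x, the affine constraints y >= 0 and y >= x together with the quadratic (complementarity) constraint y(y - x) = 0. Any input-output pair obeying these constraints lies in the abstracted relation that the paper encodes inside a semidefinite program. The following is a minimal numerical sketch of these constraints only, not the DeepSDP implementation from the companion repository:

```python
import numpy as np

def relu(x):
    """Elementwise ReLU activation."""
    return np.maximum(0.0, x)

# Sample inputs across negative and positive values.
x = np.linspace(-5.0, 5.0, 101)
y = relu(x)

# Affine constraints imposed by ReLU on its input-output pairs.
assert np.all(y >= 0.0)       # output is nonnegative
assert np.all(y >= x)         # output dominates the input

# Quadratic complementarity constraint: y * (y - x) = 0, since
# either y = 0 (inactive region) or y = x (active region).
assert np.allclose(y * (y - x), 0.0)

print("ReLU affine and quadratic constraints hold on all samples")
```

These constraints over-approximate the ReLU graph: every (x, y) generated by ReLU satisfies them, which is why safety of the abstracted network implies safety of the original one.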