Verifying the generalization of deep learning to out-of-distribution domains
Publication: 6611966
DOI: 10.1007/s10817-024-09704-7
MaRDI QID: Q6611966
Osher Maayan, Tom Zelazny, Guy Katz, Guy Amir, Michael Schapira
Publication date: 27 September 2024
Published in: Journal of Automated Reasoning
Cites Work
- Neural network repair with reachability analysis
- Static analysis of ReLU neural networks with tropical polyhedra
- Safety verification of deep neural networks
- Reluplex: an efficient SMT solver for verifying deep neural networks
- Bisimulations for neural network reduction
- Verification of deep convolutional neural networks using ImageStars
- An abstraction-based framework for neural network verification
- An SMT-based approach for verifying binarized neural networks
- Lagrange Multipliers and Optimality
- Formal Verification of Piece-Wise Linear Feed-Forward Neural Networks
- Shield Synthesis
- Formal verification of neural network controlled autonomous systems
- Reachability analysis for neural feedback systems using regressive polynomial rule inference
- DeepSafe: a data-driven approach for assessing robustness of neural networks
- Formal specification for deep neural networks
- Neural Network Verification Using Residual Reasoning
- Temporal stream logic modulo theories
- Run-time optimization for learned controllers through quantitative games
- Verifying Recurrent Neural Networks Using Invariant Inference
- Towards formal XAI: formally approximate minimal explanations of neural networks
- Verifying generalization in deep learning
- Efficient neural network analysis with sum-of-infeasibilities