Rationalizing predictions by adversarial information calibration
Publication: 2680803
DOI: 10.1016/j.artint.2022.103828
OpenAlex: W4309634482
MaRDI QID: Q2680803
Lei Sha, Oana-Maria Camburu, Thomas Lukasiewicz
Publication date: 4 January 2023
Published in: Artificial Intelligence
Full work available at URL: https://arxiv.org/abs/2012.08884
Keywords: natural language processing; interpretability; deep neural networks; information calibration; rationale extraction
Cites Work
- Predictive learning via rule ensembles
- Greedy function approximation: A gradient boosting machine.
- Planning and acting in partially observable stochastic domains
- Image interpretation with a conceptual graph: labeling over-segmented images and detection of unexpected objects
- Interpretable classifiers using rules and Bayesian analysis: building a better stroke prediction model
- Simple statistical gradient-following algorithms for connectionist reinforcement learning
- Explanation in AI and law: past, present and future
- Very simple classification rules perform well on most commonly used datasets
- Control of perceptual attention in robot driving
- Foundations of Rule Learning
- Characterizations of an Empirical Influence Function for Detecting Influential Cases in Regression
- doi:10.1162/153244303322533223
- Visualizing the Effects of Predictor Variables in Black Box Supervised Learning Models
- Learning representations by back-propagating errors