Interpreting deep learning models with marginal attribution by conditioning on quantiles
DOI: 10.1007/s10618-022-00841-4
OpenAlex: W3137180271
Wikidata: Q114859255 (Scholia: Q114859255)
MaRDI QID: Q2172619
Michael Merz, Ronald Richman, Andreas Tsanakas, Mario V. Wüthrich
Publication date: 16 September 2022
Published in: Data Mining and Knowledge Discovery
Full work available at URL: https://arxiv.org/abs/2103.11706
Keywords: interaction; deep learning; variable importance; explainable AI; accumulated local effects; attribution; locally interpretable model-agnostic explanation; model-agnostic tools; partial dependence plot; post-hoc analysis
Related Items (2)
- Lasso regularization within the LocalGLMnet architecture
- LocalGLMnet: interpretable deep learning for tabular data
Cites Work
- Predictive learning via rule ensembles
- Greedy function approximation: A gradient boosting machine.
- Ensembling neural networks: Many could be better than all
- Explanation in artificial intelligence: insights from the social sciences
- 10.1162/153244303322533223
- Discrimination-free insurance pricing
- Visualizing the Effects of Predictor Variables in Black Box Supervised Learning Models
- Prediction, Estimation, and Attribution
- Estimating Quantile Sensitivities
- Random forests