Explanation in artificial intelligence: insights from the social sciences
From MaRDI portal
Publication: 2321252
DOI: 10.1016/j.artint.2018.07.007 · zbMath: 1478.68274 · arXiv: 1706.07269 · OpenAlex: W2963095307 · Wikidata: Q102363022 · Scholia: Q102363022 · MaRDI QID: Q2321252
Publication date: 28 August 2019
Published in: Artificial Intelligence
Full work available at URL: https://arxiv.org/abs/1706.07269
Related Items
- A local method for identifying causal relations under Markov equivalence
- Relation between prognostics predictor evaluation metrics and local interpretability SHAP values
- Witnesses for Answer Sets of Logic Programs
- On Tackling Explanation Redundancy in Decision Trees
- Mathematical optimization in classification and regression trees
- Model Uncertainty and Correctability for Directed Graphical Models
- Formal Methods in FCA and Big Data
- Non-monotonic explanation functions
- Necessary and sufficient explanations for argumentation-based conclusions
- Persuasive contrastive explanations for Bayesian networks
- Probabilistic causes in Markov chains
- Generating contrastive explanations for inductive logic programming based on a near miss approach
- Heterogeneous causal effects with imperfect compliance: a Bayesian machine learning approach
- Interpreting deep learning models with marginal attribution by conditioning on quantiles
- A Comprehensive Framework for Learning Declarative Action Models
- Objective-based counterfactual explanations for linear discrete optimization
- Model transparency and interpretability: survey and application to the insurance industry
- A \(k\)-additive Choquet integral-based approach to approximate the SHAP values for local interpretability in machine learning
- The explanation game: a formal framework for interpretable machine learning
- Explainable acceptance in probabilistic and incomplete abstract argumentation frameworks
- Counterfactuals as modal conditionals, and their probability
- Explainable subgradient tree boosting for prescriptive analytics in operations management
- Logic explained networks
- Tractability of explaining classifier decisions
- Explaining black-box classifiers: properties and functions
- A logic of "black box" classifier systems
- ASP and subset minimality: enumeration, cautious reasoning and MUSes
- Is there a role for statistics in artificial intelligence?
- On computing probabilistic abductive explanations
- On the (complete) reasons behind decisions
- The effects of explanations on automation bias
- A general framework for personalising post hoc explanations through user knowledge integration
- A machine learning approach to differentiate between COVID-19 and influenza infection using synthetic infection and immune response data
- Synthesizing explainable counterfactual policies for algorithmic recourse with program synthesis
- Using analogical proportions for explanations
- A framework for inherently interpretable optimization models
- Efficient search for relevance explanations using MAP-independence in Bayesian networks
- Providing personalized explanations: a conversational approach
- On the local coordination of fuzzy valuations
- Some models are useful, but how do we know which ones? Towards a unified Bayesian model taxonomy
- Exploiting Game Theory for Analysing Justifications
- A Survey on the Explainability of Supervised Machine Learning
- Some thoughts on knowledge-enhanced machine learning
- Argumentative explanations for interactive recommendations
- Counterfactual state explanations for reinforcement learning agents via generative deep learning
- Paracoherent answer set computation
- Why bad coffee? Explaining BDI agent behaviour with valuings
- Editable machine learning models? A rule-based framework for user studies of explainability
- The spherical \(k\)-means++ algorithm via local search
- Explanation in AI and law: past, present and future
- Detecting correlations and triangular arbitrage opportunities in the Forex by means of multifractal detrended cross-correlations analysis
- Beneficial and harmful explanatory machine learning
- Story embedding: learning distributed representations of stories based on character networks
- Local and global explanations of agent behavior: integrating strategy summaries with saliency maps
- The quest of parsimonious XAI: a human-agent architecture for explanation formulation
- Knowledge graphs as tools for explainable machine learning: a survey
- Toward an explainable machine learning model for claim frequency: a use case in car insurance pricing with telematics data
- A maximum-margin multisphere approach for binary multiple instance learning
- On cognitive preferences and the plausibility of rule-based models
- Comments on "Data science, big data and statistics"
- Learning Optimal Decision Sets and Lists with SAT
- Explainable Deep Learning: A Field Guide for the Uninitiated
- Defining formal explanation in classical logic by substructural derivability
- SAT-based rigorous explanations for decision lists
Cites Work
- Conditional logic of actions and causation
- A theory of diagnosis from first principles
- Complexity results for structure-based causality
- Causes and explanations in the structural-model approach: Tractable cases
- Abductive Inference
- Causes and Explanations: A Structural-Model Approach. Part I: Causes
- Causes and Explanations: A Structural-Model Approach. Part II: Explanations
- A Short Introduction to Computational Social Choice