Finding MAPs for belief networks is NP-hard
From MaRDI portal
Publication: 1332845
DOI: 10.1016/0004-3702(94)90072-8 · zbMath: 0818.68097 · OpenAlex: W2130178369 · Wikidata: Q57518815 · Scholia: Q57518815 · MaRDI QID: Q1332845
Publication date: 22 August 1995
Published in: Artificial Intelligence
Full work available at URL: https://doi.org/10.1016/0004-3702(94)90072-8
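As a brief illustration of the problem this paper studies (not part of the record itself): the MAP task asks for the most probable joint assignment to the variables of a belief network, and the paper shows this is NP-hard in general. The sketch below brute-forces the MAP assignment for a hypothetical two-node network A → B with made-up conditional probability tables; exhaustive enumeration is exact but scales exponentially, which is why hardness results matter.

```python
from itertools import product

# Hypothetical CPTs for a two-node belief network A -> B
# (the numbers are assumptions for illustration only).
p_a = {0: 0.6, 1: 0.4}                      # P(A)
p_b_given_a = {0: {0: 0.7, 1: 0.3},         # P(B | A=0)
               1: {0: 0.2, 1: 0.8}}         # P(B | A=1)

def joint(a, b):
    """Joint probability P(A=a, B=b) via the chain rule."""
    return p_a[a] * p_b_given_a[a][b]

# MAP over {A, B}: enumerate every assignment and keep the most probable.
best = max(product([0, 1], repeat=2), key=lambda ab: joint(*ab))
print(best)  # the MAP assignment (a, b)
```

Here enumeration visits all 2^2 assignments; for n binary variables it visits 2^n, and the paper proves no polynomial-time shortcut exists unless P = NP.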
Related Items (42)
- The Bayesian ontology language \(\mathcal {BEL}\)
- Finding MAPs for belief networks is NP-hard
- On some properties of the low-dimensional Gumbel perturbations in the perturb-and-MAP model
- Approximate belief updating in max-2-connected Bayes networks is NP-hard
- Multi-classifiers of Small Treewidth
- Cycle-based cluster variational method for direct and inverse inference
- Understanding the role of noise in stochastic local search: analysis and experiments
- Exploiting case-based independence for approximating marginal probabilities
- Monitoring of perception systems: deterministic, probabilistic, and learning-based fault detection and identification
- Implicitly preserving semantics during incremental knowledge base acquisition under uncertainty
- Portfolios in stochastic local search: efficiently computing most probable explanations in Bayesian networks
- On Imperfect Recall in Multi-Agent Influence Diagrams
- Most probable explanations in Bayesian networks: complexity and tractability
- A computational-level explanation of the speed of goal inference
- Rational analysis, intractability, and the prospects of `as if'-explanations
- Cost-based temporal reasoning
- Multi-dimensional classification with Bayesian networks
- A review on evolutionary algorithms in Bayesian network learning and inference tasks
- Controlled generation of hard and easy Bayesian networks: Impact on maximal clique size in tree clustering
- Coherence measures and inference to the best explanation
- Conditional random fields for pattern recognition applied to structured data
- Use of Explanation Trees to Describe the State Space of a Probabilistic-Based Abduction Problem
- An algorithm for finding MAPs for belief networks through cost-based abduction
- Bayesian model-based diagnosis
- A framework for building knowledge-bases under uncertainty
- Understanding the scalability of Bayesian network inference using clique tree growth curves
- Variable neighborhood search for graphical model energy minimization
- Most frugal explanations in Bayesian networks
- The Complexity of Finding kth Most Probable Explanations in Probabilistic Networks
- The role of relevance in explanation. II: Disjunctive assignments and approximate independence
- Hybrid algorithms for approximate belief updating in Bayes nets
- Approximating MAPs for belief networks is NP-hard and other theorems
- Finding the maximum a posteriori probability (MAP) in a Bayesian taxonomic key is NP-hard
- DIRECTING GENETIC ALGORITHMS FOR PROBABILISTIC REASONING THROUGH REINFORCEMENT LEARNING
- The complexity of approximating MAPs for belief networks with bounded probabilities
- Unnamed Item
- Complexity results for structure-based causality
- Unnamed Item
- Evaluation of Bayesian networks with flexible state-space abstraction methods
- Energy distribution view for monotonic dual decomposition
- Complexity of probabilistic reasoning in directed-path singly-connected Bayes networks
- Complexity results for explanations in the structural-model approach
Cites Work
- Approximating probabilistic inference in Bayesian belief networks is NP-hard
- A linear constraint satisfaction approach to cost-based abduction
- Finding MAPs for belief networks is NP-hard
- The computational complexity of probabilistic inference using Bayesian belief networks
- Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images