Under-approximating expected total rewards in POMDPs
Publication: 6535603
DOI: 10.1007/978-3-030-99527-0_2 · zbMATH Open: 1547.68413 · MaRDI QID: Q6535603
Authors: (name not available), Joost-Pieter Katoen, Tim Quatmann
Publication date: 1 February 2024
MSC classifications: Markov and semi-Markov decision processes (90C40); Specification and verification (program logics, model checking, etc.) (68Q60)
Cites Work
- Title not available (6 cited works)
- Planning and acting in partially observable stochastic domains
- Optimal cost almost-sure reachability in POMDPs
- On the undecidability of probabilistic planning and related stochastic optimization problems
- Enforcing almost-sure reachability in POMDPs
- Partially observable Markov decision processes with imprecise parameters
- Verification and control of partially observable probabilistic systems
- Reinforcement learning with limited reinforcement: using Bayes risk for active learning in POMDPs
- Bounded model checking for probabilistic programs
- Optimal control of Markov processes with incomplete state information
- Verification of Markov Decision Processes Using Learning Algorithms
- Qualitative Analysis of Partially-Observable Markov Decision Processes
- The Optimal Search for a Moving Target When the Search Path Is Constrained
- State of the Art—A Survey of Partially Observable Markov Decision Processes: Theory, Models, and Algorithms
- Computationally Feasible Bounds for Partially Observed Markov Decision Processes
- The Optimal Control of Partially Observable Markov Processes over the Infinite Horizon: Discounted Costs
- The Optimal Control of Partially Observable Markov Processes over a Finite Horizon
- Sound value iteration
- Continuous-time Markov decisions based on partial exploration
- Verification of Indefinite-Horizon POMDPs
Related Items (1)