Enforcing almost-sure reachability in POMDPs
From MaRDI portal
Publication: 832296
DOI: 10.1007/978-3-030-81688-9_28
zbMath: 1493.68213
arXiv: 2007.00085
OpenAlex: W3185890095
MaRDI QID: Q832296
Nils Jansen, Sanjit A. Seshia, Sebastian Junges
Publication date: 25 March 2022
Full work available at URL: https://arxiv.org/abs/2007.00085
Mathematics Subject Classification:
- Markov and semi-Markov decision processes (90C40)
- Specification and verification (program logics, model checking, etc.) (68Q60)
- Problem solving in the context of artificial intelligence (heuristics, search strategies, etc.) (68T20)
- Probability in computer science (algorithm analysis, random structures, phase transitions, etc.) (68Q87)
Related Items (3)
- Risk-aware shielding of partially observable Monte Carlo planning policies
- Task-guided IRL in POMDPs that scales
- Task-Aware Verifiable RNN-Based Policies for Partially Observable Markov Decision Processes
Uses Software
Cites Work
- Planning and acting in partially observable stochastic domains
- Optimal cost almost-sure reachability in POMDPs
- Minimal counterexamples for linear-time probabilistic verification
- Verification and control of partially observable probabilistic systems
- Deep reinforcement learning with temporal logics
- Permissive Controller Synthesis for Probabilistic Systems
- Temporal logic motion planning using POMDPs with parity objectives
- Qualitative Analysis of Partially-Observable Markov Decision Processes
- Shield Synthesis:
- A General Safety Framework for Learning-Based Control in Uncertain Robotic Systems
- Probabilistic ω-automata
- Algorithms for Omega-Regular Games with Imperfect Information
- Omega-Regular Objectives in Model-Free Reinforcement Learning