Pages that link to "Item:Q72343"
From MaRDI portal
The following pages link to Planning and acting in partially observable stochastic domains (Q72343):
Displaying 50 items.
- pomdpSolve (Q72344)
- Optimal cost almost-sure reachability in POMDPs (Q253969)
- An evidential approach to SLAM, path planning, and active exploration (Q274441)
- Goal-directed learning of features and forward models (Q280351)
- Optimal speech motor control and token-to-token variability: a Bayesian modeling approach (Q310158)
- A two-state partially observable Markov decision process with three actions (Q323443)
- A synthesis of automated planning and reinforcement learning for efficient, robust decision-making (Q334800)
- Active inference and agency: optimal control without cost functions (Q353847)
- Planning for multiple measurement channels in a continuous-state POMDP (Q360261)
- Multi-stage classifier design (Q374149)
- Testing probabilistic equivalence through reinforcement learning (Q383369)
- Exploiting symmetries for single- and multi-agent partially observable stochastic domains (Q456732)
- Markov LIMID processes for representing and solving renewal problems (Q475217)
- Fast strong planning for fully observable nondeterministic planning problems (Q504220)
- The value of information for populations in varying environments (Q540576)
- Exact decomposition approaches for Markov decision processes: a survey (Q606196)
- Planning in partially-observable switching-mode continuous domains (Q616766)
- Computing rank dependent utility in graphical models for sequential decision problems (Q646548)
- Decentralized MDPs with sparse interactions (Q650520)
- Optimal decision rules in repeated games where players infer an opponent's mind via simplified belief calculation (Q725019)
- Bottom-up learning of hierarchical models in a class of deterministic POMDP environments (Q747543)
- Group sparse optimization for learning predictive state representations (Q778374)
- From knowledge-based programs to graded belief-based programs. I: On-line reasoning (Q813424)
- Contingent planning under uncertainty via stochastic satisfiability (Q814473)
- Enforcing almost-sure reachability in POMDPs (Q832296)
- Tutorial series on brain-inspired computing. IV: Reinforcement learning: machine learning and natural learning (Q867508)
- Cost-sensitive feature acquisition and classification (Q869033)
- Affect control processes: intelligent affective interaction using a partially observable Markov decision process (Q901039)
- Conformant plans and beyond: principles and complexity (Q969534)
- Partially observable Markov decision process approximations for adaptive sensing (Q977009)
- Transfer in variable-reward hierarchical reinforcement learning (Q1009300)
- Partially observable Markov decision processes with imprecise parameters (Q1028935)
- A tutorial on partially observable Markov decision processes (Q1042307)
- Permissive planning: Extending classical planning to uncertain task domains. (Q1399128)
- Finite-horizon LQR controller for partially-observed Boolean dynamical systems (Q1626876)
- Autonomous agents modelling other agents: a comprehensive survey and open problems (Q1639697)
- Open problems in universal induction &amp; intelligence (Q1662486)
- Planning in hybrid relational MDPs (Q1699911)
- Reasoning and predicting POMDP planning complexity via covering numbers (Q1712497)
- Computation of weighted sums of rewards for concurrent MDPs (Q1731592)
- Markov decision processes with sequential sensor measurements (Q1737870)
- Task-structured probabilistic I/O automata (Q1745718)
- Probabilistic may/must testing: retaining probabilities by restricted schedulers (Q1941883)
- Policy iteration for bounded-parameter POMDPs (Q1955470)
- An affective mobile robot educator with a full-time job (Q1978443)
- Counterexample-guided inductive synthesis for probabilistic systems (Q1982641)
- Knowledge-based programs as succinct policies for partially observable domains (Q2046009)
- Partially observable environment estimation with uplift inference for reinforcement learning based recommendation (Q2071406)
- Learning and planning in partially observable environments without prior domain knowledge (Q2076979)
- Optimizing active surveillance for prostate cancer using partially observable Markov decision processes (Q2083968)