Reinforcement learning-based design of sampling policies under cost constraints in Markov random fields: application to weed map reconstruction
DOI: 10.1016/j.csda.2013.10.002
zbMath: 1506.62024
OpenAlex: W1985133095
Wikidata: Q60558398 (Scholia: Q60558398)
MaRDI QID: Q1623384
Régis Sabbadin, Nathalie Peyrard, Sabrina Gaba, Mathieu Bonneau
Publication date: 23 November 2018
Published in: Computational Statistics and Data Analysis
Full work available at URL: https://doi.org/10.1016/j.csda.2013.10.002
Keywords: dynamic programming; Markov decision process; sampling design; Gibbs sampling; least-squares linear regression; weed mapping
MSC classifications:
- Computational methods for problems pertaining to statistics (62-08)
- Applications of statistics to environmental and related topics (62P12)
- Sampling theory, sample surveys (62D05)
- Learning and adaptive systems in artificial intelligence (68T05)
- Markov and semi-Markov decision processes (90C40)
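The keyword and classification lists above reference Gibbs sampling in Markov random fields, one of the techniques used in the paper for map reconstruction. As a rough, generic illustration of that keyword only (not the paper's actual weed-mapping model), a single Gibbs-sampling sweep over a binary Ising-type MRF on a grid might look like the following sketch; the function name, grid representation, and coupling parameter `beta` are illustrative assumptions:

```python
import math
import random

def gibbs_sweep(grid, beta, rng):
    """One Gibbs-sampling sweep over a binary (Ising-like) Markov random field.

    grid: 2D list of +1/-1 spins; beta: neighbour coupling strength;
    rng: a random.Random instance. Each site is resampled in turn from its
    full conditional given its 4-nearest neighbours (free boundary).
    This is a generic textbook sketch, not the paper's model.
    """
    h, w = len(grid), len(grid[0])
    for i in range(h):
        for j in range(w):
            # Sum of the four nearest-neighbour spins (free boundary).
            s = 0
            if i > 0:
                s += grid[i - 1][j]
            if i < h - 1:
                s += grid[i + 1][j]
            if j > 0:
                s += grid[i][j - 1]
            if j < w - 1:
                s += grid[i][j + 1]
            # Full conditional P(x_ij = +1 | neighbours) for the Ising model.
            p_plus = 1.0 / (1.0 + math.exp(-2.0 * beta * s))
            grid[i][j] = 1 if rng.random() < p_plus else -1
    return grid

if __name__ == "__main__":
    rng = random.Random(0)
    grid = [[rng.choice([-1, 1]) for _ in range(8)] for _ in range(8)]
    for _ in range(50):
        gibbs_sweep(grid, beta=0.8, rng=rng)
    print(grid[0])  # one row of the sampled configuration
```

Repeated sweeps of this kind yield approximate draws from the MRF, which is the building block that sampling-policy methods like the paper's use to simulate unobserved map cells.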
Related Items (3)
Cites Work
- Optimal predictive design augmentation for spatial generalised linear mixed models
- Model-based adaptive spatial sampling for occurrence map construction
- Efficiency evaluation of MEV spatial sampling strategies: a scenario analysis
- Probability aggregation methods in geoscience
- Dynamic decision making for graphical models applied to oil exploration
- Gibbs states of graphical representations of the Potts model with external fields
- Decision-theoretic optimal sampling in Hidden Markov Random Fields
- Optimal Value of Information in Graphical Models
- Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images
- 10.1162/1532443041827907
- Bayes Factors
- Collecting Spatial Data