Dynamic importance sampling for uniformly recurrent Markov chains
From MaRDI portal
Publication:1774208
DOI: 10.1214/105051604000001016
zbMath: 1068.60036
arXiv: math/0503454
OpenAlex: W2031669371
MaRDI QID: Q1774208
Publication date: 29 April 2005
Published in: The Annals of Applied Probability
Full work available at URL: https://arxiv.org/abs/math/0503454
Related Items
- The Convergence Rate and Asymptotic Distribution of the Bootstrap Quantile Variance Estimator for Importance Sampling
- Some Recent Results in Rare Event Estimation
- The cross-entropy method with patching for rare-event simulation of large Markov chains
- Numerical and statistical methods for the coarse-graining of many-particle stochastic systems
- State-dependent importance sampling for estimating expectations of functionals of sums of independent random variables
- Slip Rates and Slip Modes in an Actively Mode-Locked Laser
- Efficient simulation and conditional functional limit theorems for ruinous heavy-tailed random walks
- Optimal importance sampling with explicit formulas in continuous time
- A sequential Monte Carlo approach to computing tail probabilities in stochastic models
- Efficient importance sampling for Monte Carlo evaluation of exceedance probabilities
- A note on random walks with absorbing barriers and sequential Monte Carlo methods
- Dynamic importance sampling for queueing networks
- Adaptive sampling of large deviations
- Quantitative Differentiation: A General Formulation
- Rare Event Simulation of Small Noise Diffusions
- Approximation of bounds on mixed-level orthogonal arrays
- Estimation of extreme quantiles in a simulation model
- Asymptotically optimal importance sampling for Jackson networks with a tree topology
- Importance sampling for a Markov modulated queuing network
- Splitting for rare event simulation: A large deviation approach to design and analysis
- Rare event simulation for processes generated via stochastic fixed point equations
- The Valuation of American Options with Stochastic Stopping Time Constraints
Cites Work
- Markov additive processes. I: Eigenvalue properties and limit theorems
- Markov additive processes. II: Large deviations
- Large deviations for empirical measures of Markov chains
- Conjugate processes and the simulation of ruin problems
- Large deviations of uniformly recurrent Markov additive processes
- Stochastic optimal control. The discrete time case
- Importance sampling in the Monte Carlo study of sequential tests
- Weak convergence methods and singularly perturbed stochastic control and filtering problems
- Counterexamples in importance sampling for large deviations probabilities
- On Monte Carlo estimation of large deviations probabilities
- Importance sampling techniques for the multidimensional ruin problem for general Markov additive sequences of random vectors
- Large deviations and rare events in the study of stochastic algorithms
- Monte Carlo simulation and large deviations theory for uniformly recurrent Markov chains
- On large deviations theory and asymptotically efficient Monte Carlo estimation
- Stochastic Systems with Small Noise, Analysis and Simulation; A Phase Locked Loop Example
- Large deviations theory and efficient simulation of excessive backlogs in a GI/GI/m queue
- Simulating level-crossing probabilities by importance sampling
- On asymptotically efficient simulation of ruin probabilities in a Markovian environment
- On the optimality and stability of exponential twisting in Monte Carlo estimation
- Importance Sampling for the Simulation of Highly Reliable Markovian Systems
- Regenerative rare events simulation via likelihood ratios
- Importance Sampling, Large Deviations, and Differential Games
- The blind simulation problem and regenerative processes
- Risk theory in a Markovian environment
- Analysis of an importance sampling estimator for tandem queues
- Fast simulation of rare events in queueing and reliability models
- Convex Analysis