Optimal Monte Carlo integration with fixed relative precision (Q1931425)
From MaRDI portal
scientific article; zbMATH DE number 6125278
Statements
14 January 2013
Monte Carlo (MC) algorithms are an effective approach to the approximation of integrals. In the present paper, MC algorithms are used to approximate the integral \(\theta = \int_{Y} f(y) p(y) dy\), where \(p\) is a probability density, \(f\) is a real bounded function with \(0 \leq f \leq 1\), and \(Y \subseteq {\mathbb R}^{d}\). The essence of the MC methods used is to generate i.i.d. samples \(Y_{1}, \dots , Y_{n}, \dots\) from the probability density \(p\) and to use the fact that \(\theta = {\mathbb E} X_{n}\), where \(X_{n} = f(Y_{n})\). An estimator \(\widehat{\theta} = \widehat{\theta}(X_{1}, \dots, X_{n})\) is used as an \((\varepsilon, \alpha)\)-approximation of \(\theta\). The Introduction gives a brief review of the \((\varepsilon, \alpha)\)-approximation problem. The main aim of the paper is to show the worst-case optimality of the \((\varepsilon, \alpha)\)-approximation with the stopping rule \(N_{r} = \min\{n: S_{n} \geq r \}\), where \(S_{n} = \sum_{i=1}^{n} X_{i}\).

In Section 1, the main definitions and assumptions are presented: the notions of a stopping rule, an adaptively stopped MC algorithm, an \((\varepsilon, \alpha)\)-approximation, and an \(\varepsilon\)-bounded relative root mean square error (\(\varepsilon\)-RRMSE). The sequential estimators \(\overline{\theta}_{(r)}\) and \(\tilde{\theta}_{(r)}\) of \(\theta\) are defined and studied throughout the paper. In Section 2, some sequential probability inequalities are proved. Theorem 2.1 is an analogue of the Chebyshev inequality for sequential estimators, while the inequalities of Theorem 2.3 are sequential analogues of Hoeffding's inequality. Theorem 2.3 makes it possible to construct an \((\varepsilon, \alpha)\)-approximation and to determine the expected number of samples. In Section 3, upper and lower bounds for the cost of the algorithms are obtained.
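The stopping rule above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's construction: the estimators \(\overline{\theta}_{(r)}\) and \(\tilde{\theta}_{(r)}\) are not reproduced in the review, so the simple ratio \(r/N_{r}\) is used here as an illustrative stand-in, and the value of \(\theta\) is a hypothetical example.

```python
import random

def stopped_estimate(sample_x, r):
    """Run the stopping rule N_r = min{n : S_n >= r} on inputs X_n in [0, 1]
    and return (N_r, r / N_r), a simple stand-in estimator of theta."""
    s, n = 0.0, 0
    while s < r:
        n += 1
        s += sample_x()
    return n, r / n

rng = random.Random(0)
theta = 0.05  # hypothetical target value (a Bernoulli success probability)
n_r, est = stopped_estimate(lambda: 1.0 if rng.random() < theta else 0.0, r=200)
# N_r is about r / theta, so the expected cost scales like 1 / theta
```

Because each \(X_{n}\) is bounded by 1, the rule always needs at least \(r\) samples, and for Bernoulli inputs \(N_{r}\) concentrates around \(r/\theta\), which is why the cost is naturally measured by \(\theta {\mathbb E} N_{r}\).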
In Theorem 3.1, it is shown that \((N_{r}, \tilde{\theta}_{(r)})\) is an \((\varepsilon, \alpha)\)-approximation on the class of input sequences \(X_{1}, X_{2}, \dots\), and the quantity \(\theta{\mathbb E} N_{r}\) is estimated. In Theorem 3.2, an arbitrary algorithm which is an \((\varepsilon, \alpha)\)-approximation on a class of inputs that are Bernoulli schemes is considered. It is shown that \(\displaystyle \liminf_{\theta \to 0} \theta{\mathbb E} N \geq \mathrm{Low}(\varepsilon, \alpha)\), with the function \(\mathrm{Low}(\varepsilon, \alpha)\) given in explicit form in the paper. In Section 4, the relative mean square error (MSE) of the sequential estimator \(\overline{\theta}_{(r)}\) is examined. Theorem 4.1 gives an upper bound for the relative MSE \({\mathbb E}\left({\overline{\theta}_{(r)} - \theta \over \theta}\right)^2\). In Theorem 4.5, an arbitrary algorithm which is an \(\varepsilon\)-RRMSE approximation on a class of inputs that are Bernoulli sequences is considered, and a lower bound for \(\liminf_{\theta \to 0} \theta{\mathbb E} N\) is obtained. In Section 5, the theoretical results are complemented by a numerical study demonstrating the efficiency of the \((\varepsilon, \alpha)\)-approximation and of the \(\varepsilon\)-RRMSE in the developed examples. In Section 6, an extension to the case of unbounded input samples is developed. The Appendix collects the probabilistic results that play a crucial role in the paper.
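The behaviour studied in Sections 4 and 5 can be probed with a small simulation. As a hedged sketch, the ratio estimator \(r/N_{r}\) again stands in for \(\overline{\theta}_{(r)}\), and the parameter values are hypothetical; the point is only that the empirical relative RMSE shrinks roughly like \(1/\sqrt{r}\) and does not depend on how small \(\theta\) is.

```python
import random

def stopped_estimate(theta, r, rng):
    # Stopping rule N_r = min{n : S_n >= r} on Bernoulli(theta) inputs;
    # r / N_r is an illustrative stand-in for the paper's estimators.
    s, n = 0.0, 0
    while s < r:
        n += 1
        s += 1.0 if rng.random() < theta else 0.0
    return r / n

def relative_rmse(theta, r, reps=300, seed=1):
    # Monte Carlo estimate of E[((est - theta) / theta)^2] ** 0.5
    rng = random.Random(seed)
    err2 = sum(((stopped_estimate(theta, r, rng) - theta) / theta) ** 2
               for _ in range(reps))
    return (err2 / reps) ** 0.5

# Relative RMSE decreases roughly like 1 / sqrt(r)
r50, r200 = relative_rmse(0.05, 50), relative_rmse(0.05, 200)
```

Quadrupling \(r\) roughly halves the empirical relative error, in line with a relative MSE bound of order \(1/r\).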
\((\varepsilon, \alpha)\)-approximation
worst case complexity
rare event simulation
exponential inequalities
mean square error
sequential methods
stopping rule
algorithm
Chebyshev inequality
Hoeffding's inequality