Small perturbations of Gaussian regressors (Q1897089)
From MaRDI portal
scientific article; zbMATH DE number 796385
13 February 1996
The aim of this paper is to describe how the conditional expectation \(E(X\mid Y)\) given a Gaussian stochastic process (or, more generally, a Banach-valued Gaussian random vector) is perturbed by adding to the observation \(Y\) an independent Gaussian process (random vector). This situation arises quite naturally when one wants to estimate \(X\) in terms of Gaussian information that is blurred by a small independent ``noise''. Such expressions have also appeared in the study of the characterization of a mean-square integrable process by its first two conditional moments.

In Section 2 the results are formulated in terms of a comparison of the covariances of \(Y\) and \(Z\), which is most natural for stochastic processes, or in terms of the reproducing kernel Hilbert spaces (RKHS) of the Banach-valued random vectors \(Y\) and \(Z\). Section 3 contains a simple proof of the convergence under conditions ensuring that the noise \(Z\) disappears from the limit. The statement and argument do not use any specific results concerning infinite-dimensional Gaussian random vectors. The theorems proved directly in this section are corollaries of the more general convergence theorem (Theorem 1) discussed in Section 2. In order to obtain a complete description of the limit, we use the RKHS of Gaussian random vectors (or random processes) in Section 4. One key result is contained in Proposition 12, which gives necessary and sufficient conditions for the equality \({\mathcal F} (Y+ Z)= {\mathcal F} (Y, Z)\).

The convergence theorems in Sections 3 and 4 depend on a one-dimensional analogue of the convergence problem studied by \textit{W. Bryc} and \textit{W. Smoleński} [ibid. 15, No. 1, 41-46 (1992; Zbl 0764.62014)]. It is shown there that, in the case of real random variables (non-Gaussian in general), the limit of \(E(\cdot\mid Y+ \varepsilon Z)\) as \(\varepsilon\to 0\) is \(E(\cdot \mid Y)\) for absolutely continuous \(Y\), is \(E(\cdot\mid Y,Z)\) for discrete \(Y\), and may not exist for singular \(Y\).
In the Gaussian case, even in infinite dimensions (or for Gaussian processes), the limit always exists and can be identified, as is proved in Theorem 4. Several simple examples of stochastic processes are described for which the limits are those given by Theorem 4. Finally, the finite-dimensional convergence results are proved in the appendix.
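The one-dimensional absolutely continuous case can be checked by hand: for mean-zero, jointly Gaussian scalars \(X\) and \(Y\) with independent Gaussian noise \(Z\), the conditional expectation given the blurred observation \(Y+\varepsilon Z\) is linear, and letting \(\varepsilon\to 0\) recovers \(E(X\mid Y)\). A minimal numerical sketch of this scalar computation (an illustration under these assumptions, not code from the paper):

```python
# For mean-zero jointly Gaussian X, Y and independent Gaussian noise Z,
#   E(X | Y + eps*Z) = c_xy / (v_y + eps**2 * v_z) * (Y + eps*Z),
# since Cov(X, Y + eps*Z) = Cov(X, Y) = c_xy (Z is independent of X)
# and Var(Y + eps*Z) = v_y + eps**2 * v_z.

def regression_coeff(c_xy, v_y, v_z, eps):
    """Coefficient of the blurred observation Y + eps*Z in E(X | Y + eps*Z)."""
    return c_xy / (v_y + eps**2 * v_z)

c_xy, v_y, v_z = 0.8, 1.0, 1.0
for eps in (1.0, 0.1, 0.01, 0.0):
    print(f"eps={eps}: coefficient = {regression_coeff(c_xy, v_y, v_z, eps):.6f}")
# As eps -> 0 the coefficient tends to c_xy / v_y = 0.8,
# i.e. E(X | Y + eps*Z) converges to E(X | Y).
```

Here \(Y\) is absolutely continuous, so the noise disappears from the limit, matching the first case of the Bryc-Smoleński result quoted above.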
Keywords: noise; regression; conditional expectation; Banach-valued Gaussian random vector; Gaussian process; reproducing kernel Hilbert spaces