Improvement of some multidimensional estimates by reduction of dimensionality (Q1898400)

scientific article; zbMATH DE number 797146

    Statements

    Improvement of some multidimensional estimates by reduction of dimensionality (English)
    18 September 1996
    Consider \(k\) independent \(p\)-dimensional random vectors \(Y_i\), \(i = 1, \dots, k\), with means \(\mu_i\) and common covariance matrix \(\Sigma\) (or, equivalently, a \((k \times p)\) random matrix \(Y\) with mean \(\mu\) such that \(\mu^T = [\mu_1, \dots, \mu_k]\) and covariance matrix \(I_k \otimes \Sigma\)). The problem we are dealing with is the estimation of the mean matrix \(\mu\). When \(Y\) is normally distributed and \(\Sigma\) is known, it has been proved [see, e.g., \textit{C. Stein}, Proc. 3rd Berkeley Symp. Math. Stat. Probab. 1, 197-206 (1956; Zbl 0073.35602)] that the sample mean is inadmissible with respect to mean square error when \(p\) is strictly greater than two. In the same paper Stein proposed an estimate dominating the least squares estimate. Since then, considerable effort has gone into improving the James-Stein estimate, in particular by allowing \(\Sigma\) to be unknown. These works propose classes of estimates (essentially refinements of the basic James-Stein estimate) indexed by a real-valued function.

    We focus on a different class of estimates. Denoting by \(\widehat{\mu}_i\) the \(i\)th row of the sample mean, we investigate estimates of the form \(\widehat{\pi}_m(X_i) + \overline{\mu}\), where \(\widehat{\pi}_m\) is an orthogonal projection onto an \(m\)-dimensional subspace of \(\mathbb{R}^p\), \(\overline{\mu}\) is a weighted average of the \(\widehat{\mu}_i\)'s, and \(X_i = \widehat{\mu}_i - \overline{\mu}\). We actually deal with a more general framework: the data are not assumed to be normal, nor do we restrict ourselves to estimating the mean matrix; it suffices to have an unbiased estimate of \(\mu\) and a consistent, unbiased estimate of \(\Sigma\).

    This more general model is presented in Section 2, where a way of reaching optimal projections is proposed. Since we aim at an optimal estimate (in the sense of minimum mean square error (MSE)) within the class defined above, we derive in Section 4 asymptotic expansions of the risk functions; these results rest on the asymptotics developed in Section 3. We also propose in Section 4 estimates of these MSEs suitable for practical use. In Section 5 we return to the original situation, for which the MSEs and their estimates take an appealing form, and in the final section we give a short numerical example. For the sake of simplicity we illustrate only this particular case in detail, but other examples could have been presented.
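    To make the form of these estimates concrete, the following is a minimal numerical sketch (Python/NumPy) of an estimator of the type \(\widehat{\pi}_m(X_i) + \overline{\mu}\). It assumes equal weights in \(\overline{\mu}\) and, purely for illustration, takes the projection subspace to be spanned by the leading right singular vectors of the centered rows; the paper instead derives an optimal projection, so neither choice should be read as the authors' method, and projection_estimate is a hypothetical helper name.

import numpy as np

def projection_estimate(mu_hat, m, weights=None):
    """Form the estimates pi_m(X_i) + mu_bar from the rows of mu_hat.

    mu_hat : (k, p) array whose i-th row is the sample mean mu_hat_i.
    m      : dimension of the projection subspace (m <= min(k, p)).
    weights: optional length-k weights defining mu_bar (equal by default).
    """
    k, p = mu_hat.shape
    if weights is None:
        weights = np.full(k, 1.0 / k)   # equal weights: an assumption of this sketch
    mu_bar = weights @ mu_hat           # weighted average mu_bar of the rows
    X = mu_hat - mu_bar                 # centered rows X_i = mu_hat_i - mu_bar
    # Illustrative subspace: span of the top-m right singular vectors of X.
    # The paper derives an optimal projection; this PCA-style choice is only
    # a stand-in so the estimator can be run end to end.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    V = Vt[:m].T                        # (p, m) orthonormal basis of the subspace
    return X @ V @ V.T + mu_bar         # orthogonal projection, then recentering

# Hypothetical usage: 20 mean vectors in R^10, shrunk onto a 2-dimensional subspace.
rng = np.random.default_rng(0)
mu_hat = rng.normal(size=(20, 10))
estimates = projection_estimate(mu_hat, m=2)

    Projecting the centered rows onto a low-dimensional subspace trades variance for bias: the smaller \(m\) is, the stronger the shrinkage toward \(\overline{\mu}\).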
    unbiased estimates
    projections
    mean matrix
    optimal estimate
    minimum mean square error
    asymptotic expansions
    risk functions
