Estimation of autocovariance matrices for high dimensional linear processes (Q2036316)

scientific article; zbMATH DE number 7364175
Language: English
Label: Estimation of autocovariance matrices for high dimensional linear processes
Also known as: scientific article; zbMATH DE number 7364175

    Statements

    Estimation of autocovariance matrices for high dimensional linear processes (English)
    28 June 2021
    This paper examines one aspect of the problem of estimating the autocovariance matrices of a high-dimensional linear process, and it includes a brief summary of the results achieved so far in research on the subject. The underlying mathematical setting is as follows. Let \((\Omega,F,P)\) be a probability space. For all \(t\in\mathbb{N}^*:=\{0,1,2,\dots\}\), \(X_t:\Omega\to\mathbb{R}^{p}\) is a random vector such that \[ X_t=\sum_{j=0}^\infty\Phi_{j}\,\varepsilon_{t-j}\quad P\text{-a.s.}, \] where each \(\Phi_{j}\) is a $p\times p$ real matrix and $(\varepsilon_{t}=(\varepsilon_{t,1},\dots,\varepsilon_{t,p})')$ is a sequence of i.i.d. random vectors with \[ E(\varepsilon_{t})=0,\quad \Sigma=E(\varepsilon_{t}\varepsilon_{t}')\text{ for all }t\in\mathbb{N}^*, \] which also satisfies one of the following conditions: \begin{itemize} \item[1)] $\varepsilon_{t}$ is a Gaussian random vector; \item[2)] \[ E(\exp(u\varepsilon_{t,i}\varepsilon_{s,j}))\leq\exp(\sigma^2u^2/2),\quad u\in\mathbb{R}, \] for all $t$ and $s$ in $\mathbb{N}^*$, $1\leq i,j\leq p$, and some $\sigma>0$; \item[3)] \(E(|\varepsilon_{t,i}|^\beta)<\infty\) for some $\beta>2$, for all $t\in\mathbb{N}^*$, $1\leq i\leq p$. \end{itemize} For each $k\in\mathbb{N}$ let $\Gamma_{k}$ be the $p\times p$ autocovariance matrix \[ \Gamma_{k}=\mathrm{Cov}(X_{t},X_{t-k})=\sum_{j=k}^\infty\Phi_{j}\Sigma\Phi'_{j-k}. \]
    The main objective of the paper is to evaluate a previously proposed estimator of $\Gamma_{k}$ based on a sample $X_1,\dots,X_{n}$. Let $\hat{\Gamma}_{k}$ be the sample autocovariance matrix of order $0\leq k\leq n-1$, defined as \[ \hat{\Gamma}_k=\frac{1}{n-k} \sum_{t=k+1}^nX_{t}X'_{t-k}=[\hat{\gamma}_{i,j}^{k}]_{1\leq i,j\leq p}. \] The estimator to be evaluated is a banded version of \(\hat{\Gamma}_{k}\), namely \[ B_{l_{n}}(\hat{\Gamma}_{k})=[\hat{\gamma}_{i,j,n}^{k}]_{1\leq i,j\leq p}, \] where \[ \hat{\gamma}_{i,j,n}^{k} = \begin{cases} \hat{\gamma}_{i,j}^{k} &\text{if }|i-j|\leq l_{n}, \\ 0 &\text{otherwise}, \end{cases} \] and \((l_{n})\) is a suitably chosen sequence with $l_{n}\to \infty$ as $n\to\infty$. The paper works in a high-dimensional regime, assuming $p=p(n)\to \infty$ as $n\to\infty$.
    According to the author, the main result of the paper is a rate of convergence for the sequence \[ \Vert B_{l_{n}}(\hat{\Gamma}_{k})-\Gamma_{k}\Vert_2:=\sqrt{\lambda_{\max}(MM')}, \] where $\lambda_{\max}(MM')$ denotes the maximum eigenvalue of the matrix $MM'$ and $M=B_{l_{n}}(\hat{\Gamma}_{k})-\Gamma_{k}$. Non-asymptotic bounds on the estimation error of $\Gamma_{k}$ are also obtained for finite samples. Full and detailed proofs of the stated results are given in an appendix. In conclusion, the author compares his results with similar results obtained by other authors, acknowledging that although the convergence rate presented in this paper improves on previous ones, the autocovariance estimators are difficult to compare because they rest on different assumptions.
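    The following is a minimal numerical sketch (Python/NumPy) of the objects defined above: the sample autocovariance \(\hat{\Gamma}_k\), its banded version \(B_{l_n}(\hat{\Gamma}_k)\), and the spectral-norm estimation error. All dimensions, the coefficient matrices \(\Phi_j\), the truncation level and the bandwidth \(l_n\) are illustrative choices made here for the sketch, not values taken from the paper.
```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes only: n, p, k, the bandwidth l_n, the truncation J and
# the coefficient matrices Phi_j are arbitrary choices, not from the paper.
n, p, k, l_n = 400, 50, 1, 5
J = 10                                          # truncate the MA(infinity) expansion
Phi = [0.5 ** j * np.eye(p) for j in range(J)]  # Phi_j, p x p

# Innovations eps_t ~ N(0, I_p), i.e. Sigma = I_p (Gaussian case, condition 1).
eps = rng.standard_normal((n + J, p))

# Linear process X_t = sum_{j=0}^{J-1} Phi_j eps_{t-j}, stored as an (n, p) array.
X = np.stack([sum(Phi[j] @ eps[t + J - j] for j in range(J)) for t in range(n)])

# Sample autocovariance of order k:
#   Gamma_hat_k = (1/(n-k)) * sum_{t=k+1}^{n} X_t X'_{t-k}
Gamma_hat_k = X[k:].T @ X[:n - k] / (n - k)

# Banding operator B_{l_n}: keep entries with |i-j| <= l_n, set the rest to 0.
i, j = np.indices((p, p))
B_Gamma_hat_k = np.where(np.abs(i - j) <= l_n, Gamma_hat_k, 0.0)

# Population autocovariance Gamma_k = sum_{j>=k} Phi_j Sigma Phi'_{j-k} (Sigma = I_p here).
Gamma_k = sum(Phi[j] @ Phi[j - k].T for j in range(k, J))

# Spectral-norm error ||B_{l_n}(Gamma_hat_k) - Gamma_k||_2 = sqrt(lambda_max(M M')).
M = B_Gamma_hat_k - Gamma_k
print("spectral-norm error:", np.sqrt(np.linalg.eigvalsh(M @ M.T).max()))
```
    The banding step sets the entries of \(\hat{\Gamma}_k\) far from the diagonal to zero; this reduction of the effective number of estimated entries is what makes control of the spectral-norm error plausible in the regime where $p$ grows with $n$.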
    high-dimensional data
    linear process
    autocovariance matrix

    Identifiers