Estimation theory of a class of semiparametric regression models (Q1201630)
From MaRDI portal
scientific article; zbMATH DE number 98066
Estimation theory of a class of semiparametric regression models (English)
17 January 1993
Let \(y=x\beta+g(T)+\varepsilon\) be a semiparametric regression model, where \(x=(x_1,\dots,x_p)\) is a vector of explanatory variables entering linearly, \(T\) is a further explanatory variable with support \([0,1]\) entering nonlinearly, \(\beta\) is a \(p\times 1\) vector of unknown parameters, \(g(\cdot)\) is an unknown smooth function on \([0,1]\), and \(\varepsilon\) is a random error with mean 0 and variance \(\sigma^2\). It is assumed that \((x,T)\) and \(\varepsilon\) are independent. Given a sample of \(n\) observations, estimates \(\widehat\beta_n\) and \(\widehat g_n\) of \(\beta\) and \(g\) are obtained by combining the nearest neighbor rule with the least squares method. Under suitable conditions it is shown that \(\sqrt{n}(\widehat\beta_n-\beta)\) is asymptotically normal and that \(\widehat g_n\) converges to \(g\) at the optimal convergence rate \(n^{-1/3}\). A \(\sqrt{n}\)-consistent estimate of \(\sigma^2\) is also given.
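The estimation strategy the review describes can be sketched numerically: partial out the nonparametric component \(g(T)\) with a nearest-neighbor smoother, then estimate \(\beta\) by least squares on the residuals. The sketch below is an illustration in the spirit of this approach, not the paper's exact construction; the simulated \(\beta\), the choice \(g(T)=\sin(2\pi T)\), the sample size, and the neighborhood size \(k\) are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate the partially linear model y = x @ beta + g(T) + eps.
# (Illustrative values; beta, g, n, and k are assumptions, not from the paper.)
n, p = 500, 2
beta = np.array([1.5, -2.0])
x = rng.normal(size=(n, p))
T = rng.uniform(0.0, 1.0, size=n)
g = np.sin(2 * np.pi * T)               # an unknown smooth function on [0, 1]
y = x @ beta + g + rng.normal(scale=0.5, size=n)

# Sort by T so that nearest neighbors in T are adjacent in the arrays.
order = np.argsort(T)
x_s, y_s = x[order], y[order]
k = 10                                   # neighborhood size for the smoother

def knn_smooth(v, k):
    """Mean of each point's k nearest neighbors in the sorted-T ordering."""
    m = len(v)
    out = np.empty(m, dtype=float)
    for i in range(m):
        lo = max(0, min(i - k // 2, m - k))
        out[i] = v[lo:lo + k].mean()
    return out

# Partial out g: subtract the nearest-neighbor fit from each column of x and from y,
# then run ordinary least squares on the residuals to estimate beta.
x_tilde = x_s - np.apply_along_axis(knn_smooth, 0, x_s, k)
y_tilde = y_s - knn_smooth(y_s, k)
beta_hat, *_ = np.linalg.lstsq(x_tilde, y_tilde, rcond=None)

# Estimate g by smoothing the linear-part residuals, and sigma^2 from the fit.
g_hat = knn_smooth(y_s - x_s @ beta_hat, k)
sigma2_hat = np.mean((y_tilde - x_tilde @ beta_hat) ** 2)

print(beta_hat)   # should be close to [1.5, -2.0]
```

The partialling-out step is what delivers the \(\sqrt{n}\)-rate for \(\widehat\beta_n\) despite \(g\) being estimable only at the slower nonparametric rate \(n^{-1/3}\): the smoother removes the \(T\)-dependence from both \(x\) and \(y\) before the parametric fit.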
Keywords: asymptotic normality; semiparametric regression model; explanatory variables; combination of nearest neighbor rule and the least squares method; optimal convergence rate