Efficiency and Robustness of Rosenbaum's Regression (Un)-Adjusted Rank-based Estimator in Randomized Experiments

From MaRDI portal

Publication: Q6384389

arXiv: 2111.15524
MaRDI QID: Q6384389

Author name not available

Publication date: 30 November 2021

Abstract: Mean-based estimators of the causal effect in a completely randomized experiment (e.g., the difference-in-means estimator) may behave poorly if the potential outcomes have heavy tails or contain outliers. We study an alternative estimator by Rosenbaum that estimates the constant additive treatment effect by inverting a randomization test based on ranks. By investigating the breakdown point and asymptotic relative efficiency of this rank-based estimator, we show that it is provably robust against heavy-tailed potential outcomes, and that its variance is asymptotically, in the worst case, at most about 1.16 times that of the difference-in-means estimator; its variance can be much smaller when the potential outcomes are not light-tailed. We further derive a consistent estimator of the asymptotic standard error of Rosenbaum's estimator, which yields a readily computable confidence interval for the treatment effect. In addition, we study a regression-adjusted version of Rosenbaum's estimator that incorporates additional covariate information into randomization inference, and we prove a gain in efficiency from this regression adjustment under a linear regression model. We illustrate through synthetic and real data that, unlike the mean-based estimators, these rank-based estimators (both unadjusted and regression-adjusted) are efficient and robust against heavy-tailed distributions, contamination, and model misspecification.
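The core idea of test inversion described in the abstract can be sketched in code. The following is a minimal illustration, not the authors' implementation (their code is in the companion repository): it estimates a constant additive effect tau by finding the shift that makes the Wilcoxon rank-sum statistic of the shifted treated outcomes equal its null expectation. The function name, search bounds, and the choice of the rank-sum statistic (rather than more general rank scores) are illustrative assumptions; ties are ignored, as with continuous outcomes.

```python
import numpy as np

def rosenbaum_estimate(y_treat, y_ctrl, lo=-100.0, hi=100.0, tol=1e-8):
    """Illustrative point estimate of a constant additive effect tau,
    obtained by inverting a Wilcoxon rank-sum test: find tau such that
    the rank sum of (y_treat - tau) among all outcomes equals its
    expectation under the null of no effect."""
    m, n = len(y_treat), len(y_ctrl)
    null_mean = m * (m + n + 1) / 2.0  # E[treated rank sum] under the null

    def rank_sum(tau):
        combined = np.concatenate([y_treat - tau, y_ctrl])
        ranks = np.argsort(np.argsort(combined)) + 1  # ranks 1..m+n; ties ignored
        return ranks[:m].sum()

    # rank_sum(tau) is a nonincreasing step function of tau, so the
    # crossing point with null_mean can be located by bisection.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if rank_sum(mid) > null_mean:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For the two-sample rank-sum statistic, this inversion coincides with the Hodges-Lehmann estimator, the median of all pairwise differences between treated and control outcomes; the bisection above is just one simple way to compute the crossing point.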

Has companion code repository: https://github.com/ghoshadi/rre

