Laplace Power-expected-posterior priors for generalized linear models with applications to logistic regression
From MaRDI portal
Publication:6384844
arXiv: 2112.02524
MaRDI QID: Q6384844
Author name not available
Publication date: 5 December 2021
Abstract: Power-expected-posterior (PEP) methodology, which borrows ideas from the literature on power priors, expected-posterior priors and unit information priors, provides a systematic way to construct objective priors. The basic idea is to use imaginary training samples to update a noninformative prior into a minimally informative prior. In this work, we develop a novel definition of PEP priors for generalized linear models that relies on a Laplace expansion of the likelihood of the imaginary training sample. This approach has various computational, practical and theoretical advantages over previous proposals for non-informative priors for generalized linear models. We place a special emphasis on logistic regression models, where sample separation presents particular challenges to alternative methodologies. We investigate both asymptotic and finite-sample properties of the procedures, showing that the method is both asymptotically and intrinsically consistent, and that its performance is at least competitive with, and in some settings superior to, that of alternative approaches in the literature.
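The abstract's central device, a Laplace (Gaussian) expansion of a logistic-regression likelihood, can be illustrated with a generic sketch. The code below is not the authors' LPEP implementation (see the companion repository for that); it only shows the standard building block the paper relies on: locating the posterior mode by Newton/IRLS iterations and approximating the posterior by a Gaussian with the inverse negative Hessian as covariance. The function name `laplace_logistic` and the ridge-like precision `prior_prec` (which also guards against complete separation) are illustrative choices, not notation from the paper.

```python
import numpy as np

def laplace_logistic(X, y, prior_prec=1e-6, n_iter=50):
    """Laplace (Gaussian) approximation to a logistic-regression posterior.

    Returns the mode beta_hat and the covariance H^{-1}, where H is the
    negative Hessian of the log-posterior at the mode. A small Gaussian
    prior precision (prior_prec) keeps the mode finite under separation.
    """
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))        # Bernoulli success probabilities
        W = mu * (1.0 - mu)                    # IRLS weights (variance function)
        grad = X.T @ (y - mu) - prior_prec * beta
        H = X.T @ (X * W[:, None]) + prior_prec * np.eye(p)
        step = np.linalg.solve(H, grad)        # Newton step toward the mode
        beta = beta + step
        if np.max(np.abs(step)) < 1e-10:       # converged to the mode
            break
    cov = np.linalg.inv(H)                     # Laplace covariance at the mode
    return beta, cov

# Usage on simulated data (assumed design: intercept plus one covariate)
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
true_beta = np.array([-0.5, 1.0])
y = (rng.random(500) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)
beta_hat, cov = laplace_logistic(X, y)
```

In the paper's construction the same expansion is applied to the likelihood of the *imaginary* training sample, which yields a tractable (normal) PEP prior rather than a posterior approximation; the mechanics of the expansion are identical.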
Has companion code repository: https://github.com/Anupreet-Porwal/LPEP-Paper-Analysis
This page was built for publication: Laplace Power-expected-posterior priors for generalized linear models with applications to logistic regression