Principles of experimental design for big data analysis
Publication: 1750251
DOI: 10.1214/16-STS604
zbMath: 1442.62174
Wikidata: Q41615597
Scholia: Q41615597
MaRDI QID: Q1750251
James M. McGree, Christopher C. Holmes, Elizabeth G. Ryan, Sylvia Richardson, Christopher C. Drovandi, Kerrie L. Mengersen
Publication date: 18 May 2018
Published in: Statistical Science
Full work available at URL: https://projecteuclid.org/euclid.ss/1504253123
Related Items (11)
- Robust active learning with binary responses
- Bivariate Residual Plots With Simulation Polygons
- Design of experiments and machine learning to improve robustness of predictive maintenance with application to a real case study
- Information-based optimal subdata selection for non-linear models
- Accounting for outliers in optimal subsampling methods
- A model robust subsampling approach for generalised linear models in big data settings
- Predictive Subdata Selection for Computer Models
- Subdata selection based on orthogonal array for big data
- Generation of all randomizations using circuits
- Experimental Design Issues in Big Data: The Question of Bias
- On greedy heuristics for computing D-efficient saturated subsets
Cites Work
- Bayesian nonparametric weighted sampling inference
- Struggles with survey weighting and regression modeling
- PCA and PLS with very large data sets
- Optimal designs to select individuals for genotyping conditional on observed binary or survival outcomes and non-genetic covariates
- Simulation-based fully Bayesian experimental design for mixed effects models
- Improving the efficiency of fully Bayesian optimal design of experiments using randomised quasi-Monte Carlo
- Least angle regression. (With discussion)
- Statistical analysis and modeling of Internet VoIP traffic for network engineering
- Discriminative variable selection for clustering with the sparse Fisher-EM algorithm
- Bayesian variable selection using cost-adjusted BIC, with application to cost-effective measurement of quality of health care
- Sequential Monte Carlo for Bayesian sequentially designed experiments for discrete data
- Fast estimation of expected information gains for Bayesian experimental designs based on Laplace approximations
- Adjusted likelihoods for synthesizing empirical evidence from studies that differ in quality and design: effects of environmental tobacco smoke
- Approximate Bayesian Inference for Latent Gaussian models by using Integrated Nested Laplace Approximations
- Nonparametric Independence Screening in Sparse Ultra-High-Dimensional Additive Models
- Sampling and Bayes' Inference in Scientific Modelling and Robustness
- Optimal allocation of time points for the random effects model
- Optimal design in random-effects regression models
- Sure Independence Screening for Ultrahigh Dimensional Feature Space
- A Resampling-Based Stochastic Approximation Method for Analysis of Large Geostatistical Data
- A Scalable Bootstrap for Massive Data
- Data Mining Algorithms
- A survey on concept drift adaptation
- Bayesian-Optimal Design via Interacting Particle Systems