A Bandit-Learning Approach to Multifidelity Approximation
DOI: 10.1137/21M1408312 · zbMath: 1478.62007 · arXiv: 2103.15342 · OpenAlex: W3150190863 · MaRDI QID: Q5022495
Robert M. Kirby, Yiming Xu, Vahid Keshavarzzadeh, Akil C. Narayan
Publication date: 19 January 2022
Published in: SIAM Journal on Scientific Computing
Full work available at URL: https://arxiv.org/abs/2103.15342
Mathematics Subject Classification:
- Computational methods for problems pertaining to statistics (62-08)
- Linear regression; mixed models (62J05)
- Monte Carlo methods (65C05)
- Learning and adaptive systems in artificial intelligence (68T05)
- Finite element, Rayleigh-Ritz and Galerkin methods for boundary value problems involving PDEs (65N30)
- Statistical aspects of big data and data science (62R07)
Related Items (2)
Uses Software
Cites Work
- UCB revisited: improved regret bounds for the stochastic multi-armed bandit problem
- Reduced basis approximation and a posteriori error estimation for affinely parametrized elliptic coercive partial differential equations. Application to transport and continuum mechanics.
- Asymptotically efficient adaptive allocation rules
- On control variate estimators
- Support-vector networks
- A generalized approximate control variate framework for multifidelity uncertainty quantification
- A Survey of Projection-Based Model Reduction Methods for Parametric Dynamical Systems
- Optimal Model Management for Multifidelity Monte Carlo Estimation
- A Stochastic Collocation Algorithm with Multifidelity Models
- Multilevel Monte Carlo Methods
- Certified Reduced Basis Methods for Parametrized Partial Differential Equations
- Accurate Uncertainty Quantification Using Inaccurate Computational Models
- Linearly Parameterized Bandits
- On Multilevel Best Linear Unbiased Estimators
- Multilevel Monte Carlo Path Simulation
- $\mathcal{H}_2$ Model Reduction for Large-Scale Linear Dynamical Systems
- Pure Exploration in Multi-armed Bandits Problems
- Asymptotically efficient allocation rules for the multiarmed bandit problem with multiple plays. Part II: Markovian rewards
- Turbulence and the dynamics of coherent structures. I. Coherent structures
- Survey of Multifidelity Methods in Uncertainty Propagation, Inference, and Optimization
- The Nonstochastic Multiarmed Bandit Problem
- Asymptotic Analysis of Multilevel Best Linear Unbiased Estimators
- MFNets: multi-fidelity data-driven networks for Bayesian learning and prediction
- Bandit Algorithms
- Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems
- Least squares estimates in stochastic regression models with applications to identification and control of dynamic systems
- Finite-time analysis of the multiarmed bandit problem