Stochastic Variance Reduction Methods for Policy Evaluation
From MaRDI portal
Publication:6283618
arXiv: 1702.07944 · MaRDI QID: Q6283618
Author name not available
Publication date: 25 February 2017
Abstract: Policy evaluation is a crucial step in many reinforcement-learning procedures; it estimates a value function that predicts the long-term value of states under a given policy. In this paper, we focus on policy evaluation with linear function approximation over a fixed dataset. We first transform the empirical policy evaluation problem into a (quadratic) convex-concave saddle-point problem, and then present a primal-dual batch gradient method, as well as two stochastic variance reduction methods, for solving it. These algorithms scale linearly in both sample size and feature dimension. Moreover, they achieve linear convergence even when the saddle-point problem has only strong concavity in the dual variables but no strong convexity in the primal variables. Numerical experiments on benchmark problems demonstrate the effectiveness of our methods.
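The saddle-point formulation mentioned in the abstract can be sketched as follows. This is a hypothetical NumPy illustration, not the paper's code: the function name `pd_svrg`, the step size, the epoch count, and the synthetic data are all assumptions. With features phi_t, next-state features phi'_t, and rewards r_t, one standard way to write empirical policy evaluation with linear function approximation is the quadratic saddle point min_theta max_w w^T(b - A theta) - (1/2) w^T C w, where A = (1/n) sum phi_t (phi_t - gamma phi'_t)^T, b = (1/n) sum r_t phi_t, and C = (1/n) sum phi_t phi_t^T; an SVRG-style primal-dual method then applies variance-reduced stochastic gradient steps around a periodically recomputed batch-gradient snapshot.

```python
import numpy as np

def pd_svrg(phis, phips, rs, gamma=0.9, step=0.05, epochs=100, seed=0):
    """Hedged sketch of SVRG-style primal-dual updates for the quadratic
    saddle point  min_theta max_w  w^T (b - A theta) - 0.5 * w^T C w.
    Hyperparameters (step, epochs) are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    n, d = phis.shape
    theta = np.zeros(d)  # primal variable (value-function weights)
    w = np.zeros(d)      # dual variable
    # Empirical matrices defining the saddle-point problem.
    A = phis.T @ (phis - gamma * phips) / n
    b = phis.T @ rs / n
    C = phis.T @ phis / n
    for _ in range(epochs):
        # SVRG snapshot: full batch gradients at the current point.
        th0, w0 = theta.copy(), w.copy()
        g_th0 = -A.T @ w0             # gradient w.r.t. theta at snapshot
        g_w0 = b - A @ th0 - C @ w0   # gradient w.r.t. w at snapshot
        for _ in range(n):
            t = int(rng.integers(n))
            phi, phip = phis[t], phips[t]
            At = np.outer(phi, phi - gamma * phip)
            Ct = np.outer(phi, phi)
            # Variance-reduced stochastic gradients (the per-sample b_t
            # term cancels in the dual correction).
            g_th = -At.T @ w + At.T @ w0 + g_th0
            g_w = (-At @ theta - Ct @ w) - (-At @ th0 - Ct @ w0) + g_w0
            theta = theta - step * g_th   # primal descent
            w = w + step * g_w            # dual ascent
    return theta

# Synthetic check on random data (purely illustrative quantities).
rng = np.random.default_rng(1)
n, d = 200, 5
phis = rng.normal(size=(n, d)) / np.sqrt(d)   # current-state features
phips = rng.normal(size=(n, d)) / np.sqrt(d)  # next-state features
rs = rng.normal(size=n)                        # rewards
theta = pd_svrg(phis, phips, rs)
A = phis.T @ (phis - 0.9 * phips) / n
b = phis.T @ rs / n
residual = np.linalg.norm(A @ theta - b) / np.linalg.norm(b)
```

At the saddle point the dual gradient vanishes, so the primal iterate should approach the solution of A theta = b (the LSTD fixed point), which is what the relative residual above measures.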