LeadCache: Regret-Optimal Caching in Networks
Publication: 6349308
arXiv: 2009.08228
MaRDI QID: Q6349308
Author name not available
Publication date: 17 September 2020
Abstract: We consider an online prediction problem in the context of network caching. Assume that multiple users are connected to several caches via a bipartite network. At any time slot, each user may request an arbitrary file chosen from a large catalog. A user's request at a slot is met if the requested file is cached in at least one of the caches connected to the user. Our objective is to predict, prefetch, and optimally distribute the files on the caches at each slot to maximize the total number of cache hits. The problem is non-trivial due to the non-convex and non-smooth nature of the objective function. In this paper, we propose LeadCache, an efficient online caching policy based on the Follow-the-Perturbed-Leader paradigm. We show that LeadCache is regret-optimal up to a factor of $\tilde{O}(n^{3/8})$, where $n$ is the number of users. We design two efficient implementations of the LeadCache policy, one based on Pipage rounding and the other based on Madow's sampling, each of which makes precisely one call to an LP-solver per iteration. Furthermore, under a Strong-Law-type assumption on the request sequence, we show that the total number of file fetches under LeadCache remains almost surely finite over an infinite horizon. Finally, we derive an approximately tight regret lower bound using results from graph coloring. We conclude that the learning-based LeadCache policy decisively outperforms the state-of-the-art caching policies both theoretically and empirically.
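The two ingredients named in the abstract can be illustrated concretely. The sketch below is a hypothetical, simplified single-cache version, not the authors' implementation: the paper's policy optimizes over a bipartite user-cache network via one LP call per slot, whereas here the Follow-the-Perturbed-Leader step reduces to caching the top-$C$ files by perturbed cumulative request count, together with Madow's systematic sampling for rounding a fractional caching vector into an integral cache of size $C$. The class and function names, the noise scaling, and the toy Zipf request stream are all illustrative assumptions.

import numpy as np

def madow_sample(p, rng):
    """Madow's systematic sampling (illustrative): given inclusion
    probabilities p with 0 <= p[i] <= 1 and sum(p) = C (an integer),
    return exactly C indices so that index i is included w.p. p[i]."""
    C = int(round(p.sum()))
    cum = np.concatenate(([0.0], np.cumsum(p)))
    u = rng.uniform()
    # Index i is selected iff some point u + k (k = 0..C-1) falls in
    # the interval [cum[i], cum[i+1]).
    return {int(np.searchsorted(cum, u + k, side="right")) - 1 for k in range(C)}

class FTPLCache:
    """Follow-the-Perturbed-Leader for a single cache of capacity C
    (hypothetical sketch; the paper's learning-rate choice differs)."""
    def __init__(self, num_files, capacity, horizon, seed=0):
        rng = np.random.default_rng(seed)
        self.counts = np.zeros(num_files)  # cumulative request counts
        self.capacity = capacity
        # One-time Gaussian perturbation, scaled roughly like sqrt(T).
        self.noise = np.sqrt(horizon) * rng.standard_normal(num_files)

    def cache_set(self):
        # Cache the top-C files by perturbed cumulative count.
        score = self.counts + self.noise
        top = np.argpartition(score, -self.capacity)[-self.capacity:]
        return {int(i) for i in top}

    def observe(self, requested_file):
        self.counts[requested_file] += 1

# Toy run: 1000 slots, Zipf-like requests over 50 files, cache of size 5.
rng = np.random.default_rng(1)
policy = FTPLCache(num_files=50, capacity=5, horizon=1000, seed=2)
hits = 0
for t in range(1000):
    cached = policy.cache_set()
    req = int(rng.zipf(1.2)) % 50
    hits += req in cached
    policy.observe(req)
print(f"hit rate: {hits / 1000:.2f}")

# Madow rounding demo: a fractional occupancy vector summing to 5.
frac = rng.dirichlet(np.ones(50)) * 5
print(sorted(madow_sample(frac, rng)))

In the network setting of the paper, the top-$C$ selection above is replaced by a linear program over the bipartite connectivity graph, and Pipage rounding or Madow's sampling converts its fractional solution into a feasible integral cache configuration.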
Has companion code repository: https://github.com/abhishekmitiitm/leadcache-neurips21