Information-theoretic bounds on quantum advantage in machine learning

Publication: 6357773

arXiv: 2101.02464
MaRDI QID: Q6357773

Author name not available

Publication date: 7 January 2021

Abstract: We study the performance of classical and quantum machine learning (ML) models in predicting outcomes of physical experiments. The experiments depend on an input parameter $x$ and involve execution of a (possibly unknown) quantum process $\mathcal{E}$. Our figure of merit is the number of runs of $\mathcal{E}$ required to achieve a desired prediction performance. We consider classical ML models that perform a measurement and record the classical outcome after each run of $\mathcal{E}$, and quantum ML models that can access $\mathcal{E}$ coherently to acquire quantum data; the classical or quantum data is then used to predict outcomes of future experiments. We prove that for any input distribution $\mathcal{D}(x)$, a classical ML model can provide accurate predictions on average by accessing $\mathcal{E}$ a number of times comparable to the optimal quantum ML model. In contrast, for achieving accurate prediction on all inputs, we prove that exponential quantum advantage is possible. For example, to predict expectations of all Pauli observables in an $n$-qubit system $\rho$, classical ML models require $2^{\Omega(n)}$ copies of $\rho$, but we present a quantum ML model using only $\mathcal{O}(n)$ copies. Our results clarify where quantum advantage is possible and highlight the potential for classical ML models to address challenging quantum problems in physics and chemistry.
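The abstract's Pauli-observable example describes a concrete prediction task: estimate $\mathrm{tr}(P\rho)$ for every $n$-qubit Pauli observable $P$ from copies of $\rho$. As an illustration only (not the authors' quantum ML protocol; the function names and parameters below are chosen purely for this sketch), the following Python/NumPy snippet simulates the naive classical strategy of measuring each Pauli observable directly on fresh copies of $\rho$. Its copy count scales with the $4^n$ Pauli strings, which illustrates the regime in which the stated $2^{\Omega(n)}$ lower bound on classical models applies.

import itertools
import numpy as np

# Single-qubit Pauli matrices.
PAULIS = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def random_density_matrix(n, rng):
    """Random n-qubit density matrix (Ginibre construction)."""
    d = 2 ** n
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho)

def pauli_string(label):
    """Tensor product of single-qubit Paulis, e.g. ('X', 'Z', 'I')."""
    op = np.array([[1.0 + 0j]])
    for ch in label:
        op = np.kron(op, PAULIS[ch])
    return op

def estimate_expectation(rho, pauli, shots, rng):
    """Estimate tr(P rho) from `shots` simulated single-copy measurements of P."""
    # Ideal measurement of a Pauli observable yields +1 with probability (1 + tr(P rho)) / 2.
    p_plus = float(np.clip((1.0 + np.real(np.trace(pauli @ rho))) / 2.0, 0.0, 1.0))
    outcomes = rng.choice([+1.0, -1.0], size=shots, p=[p_plus, 1.0 - p_plus])
    return outcomes.mean()

if __name__ == "__main__":
    n = 3        # number of qubits (kept small: there are 4**n Pauli strings)
    shots = 200  # copies of rho consumed per Pauli observable
    rng = np.random.default_rng(0)
    rho = random_density_matrix(n, rng)

    worst_error = 0.0
    for label in itertools.product("IXYZ", repeat=n):
        P = pauli_string(label)
        exact = np.real(np.trace(P @ rho))
        est = estimate_expectation(rho, P, shots, rng)
        worst_error = max(worst_error, abs(est - exact))

    total_copies = shots * 4 ** n
    print(f"n = {n}, copies of rho used = {total_copies}, worst-case error = {worst_error:.3f}")

Even at $n = 3$ this direct strategy consumes $200 \cdot 4^3 = 12{,}800$ copies of $\rho$, whereas the abstract states that the paper's quantum ML model needs only $\mathcal{O}(n)$ copies to predict all Pauli expectations.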

Has companion code repository: https://github.com/jonastyw/quantum-rnns
