|Abstract:||Suppose we have many copies of an unknown n-qubit state ρ. We measure some copies of ρ using a known two-outcome measurement E_1, then other copies using a measurement E_2, and so on. At each stage t, we generate a current hypothesis ω_t about the state ρ, using the outcomes of the previous measurements. We show that it is possible to do this in a way that guarantees that |Tr(E_t ω_t) − Tr(E_t ρ)|, the error in our prediction for the next measurement, is at least ε at most O(n/ε²) times. Even in the 'non-realizable' setting—where there could be arbitrary noise in the measurement outcomes—we show how to output hypothesis states that incur at most O(√(Tn)) excess loss over the best possible state on the first T measurements. These results generalize a 2007 theorem by Aaronson on the PAC-learnability of quantum states, to the online and regret-minimization settings. We give three different ways to prove our results—using convex optimization, quantum postselection, and sequential fat-shattering dimension—which have different advantages in terms of parameters and portability.|
|Citation:||Aaronson, Scott, Xinyi Chen, Elad Hazan, Satyen Kale, and Ashwin Nayak. "Online learning of quantum states." Journal of Statistical Mechanics: Theory and Experiment 2019, no. 12 (2019). doi:10.1088/1742-5468/ab3988|
|Type of Material:||Journal Article|
|Journal/Proceeding Title:||Journal of Statistical Mechanics: Theory and Experiment|
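The abstract's regret guarantee comes from online convex optimization over density matrices; one standard realization of that idea is a matrix multiplicative weights / regularized follow-the-leader update, which keeps the hypothesis state as a normalized matrix exponential of accumulated loss gradients. The sketch below illustrates that general technique, not the authors' exact algorithm: the learning rate `eta`, the use of absolute loss, and the function names are assumptions for illustration.

```python
import numpy as np

def _expm_herm(H):
    # Matrix exponential of a Hermitian matrix via eigendecomposition.
    w, V = np.linalg.eigh(H)
    return (V * np.exp(w)) @ V.conj().T

def mmw_online_learner(measurements, outcomes, eta=0.1):
    """Illustrative matrix-multiplicative-weights sketch (assumed
    parameters, not the paper's exact algorithm).

    At each round t we predict Tr(E_t @ omega_t) for the hypothesis
    state omega_t, observe the outcome b_t, and fold the gradient of
    the absolute loss |Tr(E_t omega_t) - b_t| into an accumulator G,
    so that omega_{t+1} is proportional to exp(-eta * G).
    """
    d = measurements[0].shape[0]
    G = np.zeros((d, d))
    predictions = []
    for E, b in zip(measurements, outcomes):
        M = _expm_herm(-eta * G)
        omega = M / np.trace(M).real      # unit-trace hypothesis state
        pred = np.trace(E @ omega).real   # predicted acceptance probability
        predictions.append(pred)
        # Subgradient of |pred - b| with respect to omega is sign(pred - b) * E.
        G = G + np.sign(pred - b) * E
    return predictions
```

For example, repeatedly measuring the projector onto |0⟩ on the state |0⟩⟨0| (true outcome 1) starts the learner at the maximally mixed prediction 1/2 and pushes its predictions toward 1 as the exponential weight on |0⟩ grows.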
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.