Abstract
Estimating the expected value of an observable appearing in a non-equilibrium stochastic process usually involves sampling. If the observable's variance is high, many samples are required. In contrast, we show that performing the same task without sampling, using tensor network compression, efficiently captures high variances in systems of various geometries and dimensions. We provide examples for which matching the accuracy of our efficient method would require a sample size scaling exponentially with system size. In particular, the high-variance observable e^{-βW}, motivated by Jarzynski's equality, with W the work done quenching from equilibrium at inverse temperature β, is exactly and efficiently captured by tensor networks.
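To illustrate why sampling the observable e^{-βW} is costly, here is a minimal sketch (not from the paper) assuming a hypothetical Gaussian work distribution W ~ N(μ, σ²), for which the Jarzynski average ⟨e^{-βW}⟩ = exp(-βμ + β²σ²/2) is known in closed form. The relative error of the sample mean grows exponentially with β²σ², so increasingly many samples are needed as the variance of W grows:

```python
import numpy as np

# Hypothetical Gaussian work distribution (parameters chosen for illustration).
beta, mu, sigma = 1.0, 2.0, 2.0

# Closed-form Jarzynski average for Gaussian W: <e^{-beta W}> = exp(-beta*mu + beta^2*sigma^2/2).
exact = np.exp(-beta * mu + 0.5 * beta**2 * sigma**2)

rng = np.random.default_rng(0)
for n in (10**2, 10**4, 10**6):
    w = rng.normal(mu, sigma, size=n)      # n sampled work values
    est = np.exp(-beta * w).mean()         # sample estimate of <e^{-beta W}>
    print(f"n={n:>8d}  estimate={est:.4f}  exact={exact:.4f}")
```

The estimate converges slowly because the average is dominated by rare trajectories with atypically small W; the relative standard deviation of the estimator is sqrt((exp(β²σ²) - 1)/n), which is what motivates the sampling-free tensor-network approach described in the abstract.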
| Original language | English |
|---|---|
| Article number | 090602 |
| Number of pages | 5 |
| Journal | Physical Review Letters |
| Volume | 114 |
| Issue number | 9 |
| DOIs | |
| Publication status | Published - 6 Mar 2015 |