Can classical big data be loaded fast enough to achieve linear algebra based quantum machine learning speed-ups?

Cite this problem as Problem 44.


Does there exist an efficient, experimentally implementable oracle that allows access to classical big data in arbitrary superpositions? Here, "efficient" means: such that the exponential speed-up promised by recent linear-algebra-based quantum machine learning algorithms is preserved.


While quantum computation promises significant speed-ups over classical computers, the known tasks that yield those speed-ups appear rather artificial. The emerging field of quantum machine learning might provide significant quantum speed-ups for many real-life applications.
The basic reason why quantum machine learning might offer huge advantages is the following: many machine learning algorithms rely heavily on linear algebra, e.g. support vector machines or neural networks [17]. But quantum mechanics is all about linear algebra, implementing it directly at its most fundamental level!

Perhaps the most famous quantum algorithm that takes this observation from intuition to practice is called HHL after its inventors A. Harrow, A. Hassidim and S. Lloyd [6]: it promises to solve many problems that can be traced back to solving A\cdot \vec{x} = \vec{b} for \vec{x} with an exponential speed-up. This algorithm is at the root of several quantum machine learning algorithms tailored to specific applications (e.g. [14, 15]). Other linear algebra quantum algorithms with the potential for exponential speed-ups exist as well (e.g. [23], [10], [11], applied e.g. in [16]), forming a toolbox called QBLAS (quantum basic linear algebra subroutines) in [5].
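To make the task concrete, the following minimal classical sketch (in NumPy, with a toy matrix chosen for illustration) shows the problem HHL addresses. A key subtlety: HHL does not output the entries of \vec{x}, but a quantum state whose amplitudes are proportional to them, which is why the normalized vector is the relevant object.

```python
import numpy as np

# The task behind HHL: given a Hermitian A and a state |b>, prepare a
# quantum state proportional to the solution x of A x = b.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])    # toy Hermitian matrix (illustration only)
b = np.array([1.0, 0.0])      # |b> is already normalized

x = np.linalg.solve(A, b)            # classical solve: poly(n) steps
x_state = x / np.linalg.norm(x)      # HHL returns |x>, not the entries of x
```

Reading out all n amplitudes of |x> would again cost order n measurements, which is one of the caveats discussed below.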

Let n be the dimension of the vectors. Problems like matrix multiplication and linear equations can often be solved classically in a number of steps polynomial in n. At the same time, even writing down the output or input vector representing the data seems to require order n steps – how does this leave any room for exponential speed-ups in n? This question is at the heart of some of the crucial caveats and limitations that are discussed, e.g., in [1], [2], [5]. The promised exponential speed-up assumes that the input data is “already there”.

How can we load the full classical big data into a quantum computer while not destroying the exponential speed-ups? How can we do this fast enough to take advantage of quantum phenomena like superposition and entanglement?

Partial solutions

1) For some particular input data, the quantum computer itself might be able to prepare the corresponding state quickly, in a small number of steps (see e.g. [18]). But this possibility means that the data has to be very simple, in the sense that a quantum computer can reproduce it quickly.
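The Grover-Rudolph construction [18] covers efficiently integrable probability distributions. The classical sketch below (a discretized Gaussian is my own example, not from [18]) only illustrates the target state: amplitudes \sqrt{p_i}, so that measuring in the computational basis samples from p.

```python
import numpy as np

# Target of Grover-Rudolph-style state preparation (classical picture):
# for an efficiently integrable distribution p over 2^n grid points,
# prepare |psi> = sum_i sqrt(p_i) |i>.
n_qubits = 3
grid = np.linspace(-2, 2, 2 ** n_qubits)
p = np.exp(-grid ** 2)       # unnormalized Gaussian weights
p /= p.sum()                 # normalized probabilities

amplitudes = np.sqrt(p)      # the amplitudes of the desired state
```

The quantum construction builds these amplitudes qubit by qubit, which is efficient precisely because the conditional probabilities of the Gaussian can be integrated quickly; generic big data has no such structure.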

In this spirit, [12] reported an application of HHL to computing electromagnetic scattering cross-sections, carefully addressing all the caveats while keeping the exponential advantage over all known classical algorithms.

2) Another possibility might be that the input data is actually the output of another quantum algorithm that the quantum computer ran before. However, that is usually not the case for recent big data that is collected by purely classical and observational means (and not the output of computations with only few and simple inputs).

3) The idea that comes closest to a universal solution is called "quantum random access memory" ("QRAM" for short). It uses a binary tree structure (called "fanout" in [3]), with the nodes switching the incoming signal according to the address pointer in order to route it to the right memory cell (the memory cells are the leaves of the tree). [3] and [4] suggested a "bucket-brigade" modification, in which only those nodes that are actually needed are activated. This seems to reduce the need for quantum error correction to the active nodes alone instead of the full addressing mechanism. The bucket-brigade QRAM promises to load the data in a number of steps that is logarithmic in the vector size (and logarithmic in the number of data points that are to be put into superposition) [1]. However, this speed can only be achieved if the input data is rather uniform, with no part dominating the others. If the data is not uniform enough, the advantage of the QRAM may be lost (see e.g. [1] [2] [9]). This means that the QRAM can at best achieve the desired speeds for some data.
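A purely classical toy model (my own illustration, not a quantum circuit) makes the bucket-brigade counting argument concrete: the address bits steer a query down the binary tree, so only one routing node per level is activated, i.e. log2(N) nodes per memory call instead of all N-1.

```python
# Classical sketch of bucket-brigade addressing: each address bit picks
# the left or right subtree, activating one routing node per tree level.
def qram_lookup(memory, address_bits):
    lo, hi = 0, len(memory)          # current range of reachable leaves
    active_nodes = 0
    for bit in address_bits:         # one node activated per level
        mid = (lo + hi) // 2
        lo, hi = (mid, hi) if bit else (lo, mid)
        active_nodes += 1
    return memory[lo], active_nodes

memory = [10, 11, 12, 13, 14, 15, 16, 17]        # N = 8 cells, depth 3
value, touched = qram_lookup(memory, [1, 0, 1])  # address 101 -> cell 5
```

In the quantum version the address register is in superposition, so the "path" is a superposition of paths; the hope of [3, 4] is that decoherence on the many inactive nodes then matters less.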

However, in [20] the authors suggest that the problem of a QRAM requiring rather uniform inputs might be solvable for quantum machine learning. The argument goes like this: since machine learning relies on observational data that comes with measurement errors, the classification should be invariant under small perturbations. The idea is to use a discretization of the data such that the entries are bounded away from zero by some \epsilon. Then the maximal non-uniformity is bounded as well, with a bound that can be chosen independently of the input size. Therefore, the authors argue, the number of memory queries can be chosen to be constant in the input size.
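The sketch below illustrates this idea with a hypothetical helper (my own simplification, not the exact construction of [20]): pushing every entry of a unit vector to magnitude at least \epsilon caps the non-uniformity ratio max_i |x_i| / min_i |x_i| at 1/\epsilon, a bound independent of the dimension n.

```python
import numpy as np

# Illustration of the discretization idea in [20] (simplified): bounding
# entries away from zero by eps bounds the non-uniformity by 1/eps,
# independently of the input dimension.
def discretize(x, eps=0.05):
    x = np.asarray(x, dtype=float)
    x = x / np.linalg.norm(x)
    small = np.abs(x) < eps
    # push tiny entries to +-eps (zero entries are sent to +eps)
    x[small] = np.sign(x[small] + (x[small] == 0)) * eps
    return x / np.linalg.norm(x)

x = discretize([0.9, 0.001, -0.0005, 0.4])
ratio = np.max(np.abs(x)) / np.min(np.abs(x))   # at most 1/eps = 20
```

Since machine-learning classifiers should be stable under perturbations of size \epsilon anyway, the argument goes, this preprocessing should not change the learning outcome.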

But there are many more conceptual subtleties that lead people to question the complexity advantages of QRAM:
– What will the complexity of the required quantum error correction turn out to be in actual implementations? Does it destroy the advantages the QRAM promises [7]?
– Even if not all components need to be simultaneously active, what is the fairest classical comparison? While [3, 4] see the QRAM as a direct successor of conventional RAM, [1] argues that it might be worth comparing HHL-like quantum algorithms using a QRAM to the classical parallel model of computation [8]. This classical model of highly parallel computation can likewise solve many linear algebra problems, such as matrix inversion and linear equations, in \log^2(n) steps [1] [13]. [1] suggests that both HHL-like quantum algorithms with QRAM and highly parallel classical computation might suffer from similar problems preventing a large-scale implementation: a great deal of communication and connectivity between the components might be needed, leading to massive latencies and an abundance of hidden complexity.

Similarly, in [19] the authors propose circuits for loading classical data into a quantum computer with logarithmic circuit depth. In particular, they also examine the problem of maintaining an exponential speed-up over classical algorithms that already have polynomial complexity. However, the authors focus on particular states that represent classical input data \{A_{x,y}\}_{x,y} in the form \sum_{x,y} |x\rangle |y \rangle |A_{x,y}\rangle. So it is not immediately clear how to create arbitrary coefficients or entangled states efficiently.
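The distinction matters, so here is a classical illustration (my own tiny example) of the basis-encoded state \sum_{x,y} |x\rangle|y\rangle|A_{x,y}\rangle: each matrix entry is written into its own register, rather than appearing as an amplitude as in HHL-style amplitude encoding.

```python
import numpy as np

# Basis encoding as in [19], written out as a state vector: for a 2x2
# binary matrix A, the state is an equal superposition of |x>|y>|A[x,y]>
# over all addresses (x, y).  One qubit per register here.
A = np.array([[0, 1],
              [1, 1]])
dim = 2 ** 3                          # registers: x (1), y (1), value (1)
state = np.zeros(dim)
for x in range(2):
    for y in range(2):
        index = (x << 2) | (y << 1) | A[x, y]   # concatenate the registers
        state[index] = 1.0
state /= np.linalg.norm(state)        # equal weight on each (x, y)
```

Note that all nonzero amplitudes are equal by construction; encoding arbitrary real coefficients as amplitudes is exactly the part that remains unclear.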

4) It is well known that many-body systems typically cannot be solved numerically, e.g. because the Hilbert space dimension scales exponentially in the number of particles. The approach of tensor networks tackles this problem by exploiting the idea that most states in Hilbert space are "untypical" or "irrelevant for most physics" (see e.g. [22]). Using tensor networks to define a subset of states one considers typical or natural, much larger systems become numerically tractable. In [21], the authors therefore suggest applying the same kind of reasoning to quantum machine learning: not all vectors in Hilbert space are relevant, and using tensor networks might reduce the complexity considerably.
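The compression at the heart of this argument can be sketched classically (a standard SVD-based matrix product state construction, see e.g. [22]; the GHZ example is my own): a low-entanglement n-qubit state is stored in far fewer parameters than its 2^n amplitudes.

```python
import numpy as np

# Compress an n-qubit state into a matrix product state (MPS) by
# repeated SVDs, keeping at most chi singular values per cut.
def to_mps(psi, n, chi=2):
    tensors, rest = [], psi.reshape(1, -1)
    for _ in range(n - 1):
        rest = rest.reshape(rest.shape[0] * 2, -1)   # split off one site
        u, s, vt = np.linalg.svd(rest, full_matrices=False)
        keep = min(chi, len(s))
        tensors.append(u[:, :keep])                  # site tensor
        rest = np.diag(s[:keep]) @ vt[:keep]
    tensors.append(rest)
    return tensors

n = 8
psi = np.zeros(2 ** n)
psi[0] = psi[-1] = 1 / np.sqrt(2)    # GHZ state: Schmidt rank 2 at every cut
mps = to_mps(psi, n, chi=2)
params = sum(t.size for t in mps)    # far fewer than 2**8 = 256 amplitudes
```

For the GHZ state the bond dimension chi = 2 is exact; for generic ("untypical") states the truncation would be lossy, which is precisely the tensor-network bet.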

The authors anticipate that such networks could be implemented soon on small, near-term quantum devices, with input and output dimensions much larger than the number of physical qubits required. More precisely, they claim that particular tensor network architectures allow the number of physical qubits required for quantum machine learning to scale logarithmically in, or even independently of, the processed data size. The authors focus on tree tensor networks and matrix product states. They also suggest that quantum machine learning algorithms based on tensor network states might be rather robust to noise, reducing the need for quantum error correction. On a side note, the authors mention that tensor networks are also extremely successful in classical machine learning and claim that a quantum computation with 16 or more virtual qubits could already outperform any classical tensor network calculation they know of.

The content of this Open Quantum Problem is based on [1], [2], [5]. For more details one should read these reviews as well as the original publications.


[1] C. Ciliberto, M. Herbster, A. D. Ialongo, M. Pontil, A. Rocchetto, S. Severini, L. Wossnig, “Quantum machine learning: a classical perspective”, Proc. R. Soc. A, vol. 474, no. 2209, p. 20170551. The Royal Society (2018), arXiv:1707.08561,
[2] S. Aaronson, “Read the fine print”, Nat. Phys. 11, 291–293 (2015)
[3] V. Giovannetti, S. Lloyd, and L. Maccone, “Architectures for a quantum random access memory”, Phys. Rev. A 78, 052310 (2008), arXiv:0807.4994
[4] V. Giovannetti, S. Lloyd, and L. Maccone, “Quantum random access memory”, Phys. Rev. Lett. 100, 160501 (2008), arXiv:0708.1879
[5] J. Biamonte, P. Wittek, N. Pancotti, P. Rebentrost, N. Wiebe, S. Lloyd, “Quantum machine learning”, Nature 549, 195–202 (2017), arXiv:1611.09347
[6] A. W. Harrow, A. Hassidim, S. Lloyd, “Quantum Algorithm for Linear Systems of Equations”, Phys. Rev. Lett. 103, 150502 (2009), arXiv:0811.3171
[7] S. Arunachalam, V. Gheorghiu, T. Jochym-O’Connor, M. Mosca, P. V. Srinivasan, “On the robustness of bucket brigade quantum RAM”, New J. Phys. 17 (2015) 123010, arXiv:1502.03450
[8] D. Heller. “A survey of parallel algorithms in numerical linear algebra”, Siam Review 20(4) (1978), pp. 740–777
[9] A. Prakash, “Quantum algorithms for linear algebra and machine learning”, PhD thesis, University of California, Berkeley, 2014
[10] S. Lloyd, M. Mohseni, P. Rebentrost, “Quantum principal component analysis”, Nat. Phys. 10, 631–633 (2014), arXiv:1307.0401
[11] P. Rebentrost, A. Steffens, S. Lloyd, “Quantum singular value decomposition of non-sparse low-rank matrices”, Phys. Rev. A 97, 012327 (2018), arXiv:1607.05404
[12] B. D. Clader, B. C. Jacobs, C. R. Sprouse, “Preconditioned Quantum Linear System Algorithm”, Phys. Rev. Lett. 110, 250504 (2013), arXiv:1301.2340
[13] L. Csanky, “Fast parallel matrix inversion algorithms”, SIAM J. Comput., 5(4), 618–623 (1976)
[14] N. Wiebe, D. Braun, S. Lloyd, “Quantum Algorithm for Data Fitting”, Phys. Rev. Lett. 109, 050505 (2012), arXiv:1204.5242
[15] Z. Zhao, J. K. Fitzsimons, J. F. Fitzsimons, “Quantum assisted Gaussian process regression”, arXiv:1512.03929
[16] M. Schuld, I. Sinayskiy, F. Petruccione, “Prediction by linear regression on a quantum computer”, Phys. Rev. A 94, 022342 (2016), arXiv:1601.07823
[17] C. M. Bishop, “Pattern recognition and machine learning”, Springer, New York, 2006
[18] L. Grover and T. Rudolph, “Creating superpositions that correspond to efficiently integrable probability distributions”, arXiv:quant-ph/0208112
[19] J. A. Cortese, T. M. Braje, “Loading Classical Data into a Quantum Computer”, arXiv:1803.01958
[20] Z. Zhao, V. Dunjko, J. K. Fitzsimons, P. Rebentrost, J. F. Fitzsimons “A note on state preparation for quantum machine learning”, arXiv:1804.00281
[21] W. Huggins, P. Patel, K. B. Whaley, E. M. Stoudenmire, “Towards Quantum Machine Learning with Tensor Networks”, arXiv:1803.11537
[22] R. Orus, “A Practical Introduction to Tensor Networks: Matrix Product States and Projected Entangled Pair States”, Annals of Physics 349 (2014) 117-158, arXiv:1306.2164
[23] F. G. S. L. Brandão, A. Kalev, T. Li, C. Y.-Y. Lin, K. M. Svore, X. Wu, “Quantum SDP Solvers: Large Speed-ups, Optimality, and Applications to Quantum Learning”, arXiv:1710.02581