Sample-based Quantum Diagonalization (SQD)
Sample-based quantum diagonalization (SQD) combines classical linear algebra and the power of quantum computing to diagonalize a Hamiltonian (matrix) and compute its eigenvalues and eigenvectors. Matrix diagonalization is an important mathematical operation because many problems in science, computing, and optimization rely on it.
The video below gives an overview of SQD, what determines its usefulness, and what makes it faster than many other approaches. The subsequent text gives more details.
1. Introduction and motivation
Consider, as an example, the energy eigenvalue equation made famous by Schrödinger:

$$\hat{H}|\psi\rangle = E|\psi\rangle$$

$\hat{H}$ is the Hamiltonian of a system, $|\psi\rangle$ is the wave function (also known as the eigenstate), and $E$ is an eigenvalue. The eigenvalues of the matrix $\hat{H}$ represent the energy levels of the system. For example, if the system is a molecule, the lowest eigenvalue represents the ground state energy of the molecule. In many problems, we are interested in ground state energy estimation.
By applying exact diagonalization techniques from linear algebra, we can diagonalize the full matrix $\hat{H}$. However, this approach becomes computationally expensive (even intractable) as the matrix grows. For example, even for small molecules the Hamiltonian in a cc-pVDZ basis can have a dimension far beyond the reach of exact diagonalization.
Fortunately, we do not always need all the eigenvalues and eigenvectors of a Hamiltonian $\hat{H}$, and therefore diagonalization of the full matrix is not required in many practical cases. For example, in the case of ground state estimation, we are interested only in the lowest eigenvalue and the corresponding eigenvector. This allows us to apply the concept of projection onto a (useful) subspace.
Consider a $D \times D$ matrix $H$, where the complete vector space (Hilbert space) has a dimension of $D$ ($D$ is large). Next, we select a subspace $S$ — a subset of the full Hilbert space — of dimension $d$, where $d \ll D$ is sufficiently small. After projecting $H$ onto this subspace, the projected matrix (say, $H_S$) will be smaller ($d \times d$). The smaller $H_S$ can be diagonalized using a suitable classical numerical method, generating eigenvalues and eigenvectors within that subspace.
Note that the subspace must support our target (for example, ground) eigenstate. In other words, the subspace onto which we project the Hamiltonian must contain the target eigenvector, so that diagonalizing the projected matrix recovers the lowest eigenvalue.
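As a self-contained illustration of this projection idea (the random matrix here is our own example, not part of the original text), the following sketch builds a subspace that provably contains the ground state and verifies that diagonalizing the projected matrix recovers the lowest eigenvalue:

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative sketch: project a Hermitian matrix onto a small subspace
# spanned by orthonormal vectors, then diagonalize the projected matrix.
rng = np.random.default_rng(0)
D, d = 8, 3  # full dimension D; subspace dimension d (d << D in practice)

A = rng.standard_normal((D, D))
H = (A + A.T) / 2  # random symmetric (Hermitian) matrix

# Build the subspace from the d lowest exact eigenvectors, so it is
# guaranteed to contain the ground state.
full_vals, full_vecs = eigh(H)
V = full_vecs[:, :d]  # columns form an orthonormal basis of the subspace S

H_S = V.T @ H @ V  # projected d x d matrix
sub_vals = eigh(H_S, eigvals_only=True)

# The lowest eigenvalue of the projected matrix equals the true one,
# because the subspace supports the ground state.
print(full_vals[0], sub_vals[0])
```

If the subspace did not contain the ground state, the lowest eigenvalue of `H_S` would only be an upper bound on the true ground state energy.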
2. Projection and Diagonalization
Suppose we want to find the lowest eigenvalue and the corresponding eigenvector of the following Hamiltonian matrix $H$ (defined numerically in the code below).
We will diagonalize the full matrix along with different projected versions ($H_S$) for different subspaces to demonstrate the scalability of the approach and the importance of the choice of subspace.
The ground state energy (minimum eigenvalue) of the matrix $H$ is $-0.5357$, and the exact ground state wave function (eigenvector) is:

$$|\psi_{\text{gs}}\rangle = 0.8\,|011\rangle + 0.6\,|101\rangle$$

that is, the ground state of the matrix is spanned by two computational basis states (vectors), $|011\rangle$ and $|101\rangle$.
# Required packages for this notebook
!pip install -q numpy scipy
import numpy as np
from scipy.linalg import eigh
np.set_printoptions(precision=4, sign="-", suppress=True, linewidth=100)
H = np.array(
[
[0.2235, -0.039, -0.1035, -0.0818, 0.1746, 0.1091, 0.1165, -0.0104],
[-0.0390, 0.6621, 0.0706, -0.1964, -0.0782, 0.2619, 0.1095, 0.0029],
[-0.1035, 0.0706, 0.9961, 0.1724, 0.1067, -0.2299, -0.1817, 0.1571],
[-0.0818, -0.1964, 0.1724, -0.1773, 0.1019, -0.4778, -0.1272, -0.0414],
[0.1746, -0.0782, 0.1067, 0.1019, 0.1418, -0.1359, -0.1793, -0.0766],
[0.1091, 0.2619, -0.2299, -0.4778, -0.1359, 0.1014, 0.1696, 0.0552],
[0.1165, 0.1095, -0.1817, -0.1272, -0.1793, 0.1696, 0.4227, 0.2702],
[-0.0104, 0.0029, 0.1571, -0.0414, -0.0766, 0.0552, 0.2702, 0.4456],
]
)
eigvals, eigvecs = eigh(H)
print("Eigenvalues:")
print(eigvals)
print(f"Minimum eigenvalue: {eigvals.min()}")
print("\nEigenvectors (columns represent vectors):")
print(eigvecs)
print("\nEigenvector for the minimum eigenvalue (ground state)")
print(eigvecs[:, np.argmin(eigvals)])
Eigenvalues:
[-0.5357 -0.1321 0.1049 0.1258 0.3616 0.6405 0.947 1.3039]
Minimum eigenvalue: -0.5356560029438817
Eigenvectors (columns represent vectors):
[[-0. -0.5612 0.098 -0.0024 0.8051 -0.0806 0.0643 0.1288]
[-0. -0.1403 -0.1985 -0.4249 -0.0092 0.585 -0.5952 0.2526]
[ 0. 0.0416 0.3041 0.2122 0.1509 -0.0139 -0.5794 -0.7086]
[ 0.8 -0.1936 -0.0127 -0.4376 -0.1081 -0.0838 0.1557 -0.2966]
[ 0. 0.6716 -0.3535 -0.2552 0.5395 0.0954 0.1449 -0.1941]
[ 0.6 0.258 0.017 0.5834 0.1441 0.1118 -0.2076 0.3954]
[ 0. 0.3088 0.5504 -0.4197 0.0626 -0.468 -0.2625 0.3657]
[-0. -0.1146 -0.6559 0.0356 -0.0394 -0.6352 -0.3856 0.0418]]
Eigenvector for the minimum eigenvalue (ground state)
[-0. -0. 0. 0.8 0. 0.6 0. -0. ]
Next, we will project the matrix $H$ onto different subspaces and check whether we can recover the exact ground state. In particular, we will project the matrix onto a subspace spanned by:
- the exact ground state basis vectors ($|011\rangle$ and $|101\rangle$).
- vectors that exclude some or all of the exact ground state basis vectors.
- vectors that include both the exact ground state basis vectors and vectors outside the ground state (but not all possible vectors in the Hilbert space).
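Because the subspaces above are spanned by computational basis states, each projection can be computed classically by selecting the corresponding rows and columns of $H$. The sketch below (the helper name `project_onto_basis_states` is ours, not from the original) previews this for the subspace spanned by $|011\rangle$ and $|101\rangle$:

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical helper (the name is ours): project H onto the subspace
# spanned by a subset of computational basis states, given by their
# integer indices (e.g. |011> -> 3, |101> -> 5).  For standard basis
# vectors, projection simply selects the matching rows and columns of H.
def project_onto_basis_states(H, indices):
    idx = np.asarray(indices)
    return H[np.ix_(idx, idx)]

H = np.array(
    [
        [0.2235, -0.039, -0.1035, -0.0818, 0.1746, 0.1091, 0.1165, -0.0104],
        [-0.0390, 0.6621, 0.0706, -0.1964, -0.0782, 0.2619, 0.1095, 0.0029],
        [-0.1035, 0.0706, 0.9961, 0.1724, 0.1067, -0.2299, -0.1817, 0.1571],
        [-0.0818, -0.1964, 0.1724, -0.1773, 0.1019, -0.4778, -0.1272, -0.0414],
        [0.1746, -0.0782, 0.1067, 0.1019, 0.1418, -0.1359, -0.1793, -0.0766],
        [0.1091, 0.2619, -0.2299, -0.4778, -0.1359, 0.1014, 0.1696, 0.0552],
        [0.1165, 0.1095, -0.1817, -0.1272, -0.1793, 0.1696, 0.4227, 0.2702],
        [-0.0104, 0.0029, 0.1571, -0.0414, -0.0766, 0.0552, 0.2702, 0.4456],
    ]
)

# Project onto the subspace spanned by |011> and |101>: a 2x2 matrix
H_S = project_onto_basis_states(H, [3, 5])
print(eigh(H_S, eigvals_only=True).min())  # ~ -0.5357, the exact ground state energy
```

The 2x2 projected matrix already reproduces the minimum eigenvalue of the full 8x8 matrix, because this subspace contains the exact ground state.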
2.1 Case-1: Subspace includes ground state
Suppose we want to project $H$ onto a subspace ($S$) spanned by the two vectors $|011\rangle$ and $|101\rangle$. The projected Hamiltonian is defined by: