The Relationship Between Circuit Optimization and Circuit Expressibility#

Overview#

In this tutorial, we demonstrate the relationship between circuit parameter optimization and circuit expressibility. Using the variational quantum eigensolver (VQE) algorithm, we can obtain the ground-state energy of a quantum many-body system. Although a randomly parameterized circuit becomes more expressive as the number of layers grows, the entanglement entropy of the ground state satisfies an "area law", so the large amount of entanglement generated by random circuits prevents us from obtaining an accurate ground-state energy. The model considered here is

\[H = \sum_{i=1}^{n}\sigma_{i}^{z}\sigma_{i+1}^{z}+\sum_{i=1}^{n}\sigma_{i}^{x},\]

where \(\sigma_{i}^{x,z}\) are the Pauli matrices acting on the \(i\)-th qubit, with periodic boundary conditions so that \(\sigma_{n+1}^{z} \equiv \sigma_{1}^{z}\). The expressibility is measured by the Renyi entanglement entropy:

\[R_{A}^{2} = - \log \left( \mathrm{tr}\, \rho_{A}^{2} \right),\]

where \(\rho_{A}\) is the reduced density matrix obtained by a partial trace over subsystem \(A\).
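
As a concrete illustration of this definition, the following minimal numpy sketch (added here for illustration; it is not one of the original notebook cells) evaluates the second-order Renyi entropy of a two-qubit Bell state, for which the reduced density matrix is maximally mixed and the entropy equals \(\log 2\).

import numpy as np

# (|00> + |11>)/sqrt(2): a maximally entangled two-qubit state
psi = np.array([1.0, 0.0, 0.0, 1.0], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)  # indices (a, b, a', b')
rho_red = np.trace(rho, axis1=1, axis2=3)  # trace out the second qubit
renyi_2 = -np.log(np.trace(rho_red @ rho_red).real)
print(renyi_2, np.log(2))  # both ≈ 0.693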

Setup#

[1]:
import numpy as np
import tensorflow as tf
import tensorcircuit as tc
import quspin

tc.set_backend("tensorflow")
tc.set_dtype("complex128")
dtype = np.complex128

Parameters#

[2]:
N = 4  # number of qubits
NA = [i for i in range(int(N / 2))]  # subsystem A
L = 2  # number of circuit layers
num_trial = 2  # number of random circuit instances

Exact Diagonalization#

[3]:
basis = quspin.basis.spin_basis_1d(N)
J_zz = [[1.0, i, (i + 1) % N] for i in range(N)]
J_x = [[1.0, i] for i in range(N)]
H_TFIM = quspin.operators.hamiltonian(
    [["zz", J_zz], ["x", J_x]], [], basis=basis, check_symm=False, dtype=np.float64
)
E, V = H_TFIM.eigh()
E0 = E[0]
print("The ground state energy: ", E0)
Hermiticity check passed!
The ground state energy:  -5.226251859505491
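
As a cross-check that does not rely on quspin (an added sketch, not an original cell of this notebook), one can also build the same periodic-boundary Hamiltonian explicitly from Kronecker products and diagonalize it with numpy; its lowest eigenvalue should agree with E0 printed above.

X = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=np.complex128)
Z = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=np.complex128)
I2 = np.eye(2, dtype=np.complex128)


def pauli_string(ops):
    # ops: dict mapping qubit index -> single-qubit Pauli matrix (identity elsewhere)
    mat = np.array([[1.0 + 0.0j]])
    for site in range(N):
        mat = np.kron(mat, ops.get(site, I2))
    return mat


H_np = sum(pauli_string({i: Z, (i + 1) % N: Z}) for i in range(N)) + sum(
    pauli_string({i: X}) for i in range(N)
)
print(np.linalg.eigvalsh(H_np)[0])  # should agree with E0 above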

Density Matrix of the Output State#

The circuit architecture is shown below. Each block consists of two Pauli rotation gates about the \(y\) axis (\(R_{y}\)) followed by a \(CZ\) gate.

total.jpeg

[4]:
@tf.function
def get_state(v):
    layers = len(v)
    c = tc.Circuit(N)
    # Ry+Cz
    for layer in range(layers):
        if layer % 2 == 0:
            for i in range(0, N, 2):
                c.ry(i, theta=v[layer, i])
                c.ry(i + 1, theta=v[layer, i + 1])
                c.cz(i, i + 1)
        else:
            for i in range(1, N, 2):
                c.ry(i, theta=v[layer, i])
                c.ry((i + 1) % N, theta=v[layer, (i + 1) % N])
                c.cz(i, (i + 1) % N)
    ψ = c.wavefunction()  # output state
    return ψ
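
A quick sanity check on get_state (added for illustration, not an original cell): for a random parameter tensor of shape [L, N], the returned state should be normalized since the circuit is unitary.

v_test = tf.random.uniform([L, N], minval=0.0, maxval=2 * np.pi, dtype=tf.float64)
ψ_test = get_state(v_test)
# shape of the output state and its norm (expected ≈ 1.0)
print(tc.backend.shape_tuple(ψ_test), tc.backend.norm(tc.backend.reshape(ψ_test, [-1])))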

Hamiltonian#

[5]:
def get_hamiltonian():
    h = []
    w = []
    ### ZZ terms (Pauli string code: 0 = I, 1 = X, 2 = Y, 3 = Z)
    for i in range(N):
        h.append([])
        w.append(1.0)  # weight
        for j in range(N):
            if j == (i + 1) % N or j == i:
                h[i].append(3)
            else:
                h[i].append(0)
    ### transverse-field X terms
    for i in range(N):
        h.append([])
        w.append(1.0)
        for j in range(N):
            if j == i:
                h[i + N].append(1)
            else:
                h[i + N].append(0)

    hamiltonian = tc.quantum.PauliStringSum2Dense(
        tf.constant(h, dtype=tf.complex128), tf.constant(w, dtype=tf.complex128)
    )
    return hamiltonian
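
To check the Pauli-string construction (an added sketch, not an original cell), we can diagonalize the dense matrix returned by get_hamiltonian and compare its lowest eigenvalue with the exact value E0 obtained above.

h_dense = tc.backend.numpy(get_hamiltonian())
print(np.linalg.eigvalsh(h_dense)[0], E0)  # the two values should coincide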

Loss Function#

\[C(\theta) = \langle H \rangle.\]
[6]:
@tf.function
def energy_loss(v, hamiltonian):
    ψ = get_state(v)
    ψ = tf.reshape(ψ, [2**N, 1])
    loss = tf.transpose(ψ, conjugate=True) @ hamiltonian @ ψ
    loss = tc.backend.real(loss)
    return loss
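
As a quick usage example (added here, not an original cell), evaluating the loss at one set of random parameters typically gives an energy well above the exact ground-state energy E0; this is the gap the optimizer has to close.

v0 = tf.random.uniform([L, N], minval=0.0, maxval=2 * np.pi, dtype=tf.float64)
E_random = energy_loss(v0, get_hamiltonian()).numpy()[0, 0]  # energy at random parameters
print(E_random, E0)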

Renyi Entanglement Entropy#

\[R_{A}^{2} = - \log \left( \mathrm{tr}\, \rho_{A}^{2} \right),\]

where \(\rho_{A}\) is the reduced density matrix obtained by a partial trace over subsystem \(A\).

[7]:
@tf.function
def entropy(v):
    ψ = get_state(v)
    ρ_reduced = tc.quantum.reduced_density_matrix(ψ, NA)  # reduced density matrix obtained by tracing out subsystem A
    S = tc.quantum.renyi_entropy(ρ_reduced)  # Renyi entanglement entropy
    return S
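
For later reference (an added sketch, not an original cell), the Renyi-2 entropy of the half chain is bounded by \((N/2)\log 2\), the value attained by a maximally mixed reduced state; the exact ground state obtained from the diagonalization above stays below this bound, in line with the area-law argument in the overview.

S_max = len(NA) * np.log(2)  # upper bound, reached by a maximally mixed reduced state
# note: quspin's basis ordering may differ from tensorcircuit's, but for a
# contiguous half of this periodic chain the entropy value is unaffected
ψ_gs = tc.backend.cast(tc.backend.convert_to_tensor(V[:, 0]), "complex128")
ρ_gs = tc.quantum.reduced_density_matrix(ψ_gs, NA)
print("S_max = ", S_max, "; S (ground state) = ", tc.quantum.renyi_entropy(ρ_gs))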

Main Optimization Loop#

[8]:
def opt_main(v):
    opt = tc.backend.optimizer(tf.keras.optimizers.Adam(1e-2))

    hamiltonian = get_hamiltonian()

    # use vvag to get the losses and per-instance gradients of the different random circuit instances
    loss_and_grad_vag = tc.backend.jit(
        tc.backend.vvag(energy_loss, argnums=0, vectorized_argnums=0)
    )
    # use vmap to get the Renyi entanglement entropies of the different random circuit instances
    entropy_vag = tc.backend.jit(
        tc.backend.vmap(entropy, vectorized_argnums=0)
    )

    maxiter = 100
    for i in range(maxiter):
        loss, gr = loss_and_grad_vag(v, hamiltonian)

        if i == 0:
            E_initial_avg = tf.reduce_mean(loss)  # average energy at the initial random parameters
            S_initial = entropy_vag(v)
            S_initial_avg = tf.reduce_mean(S_initial)  # average Renyi entanglement entropy at the initial random parameters
        elif i == maxiter - 1:
            E_final_avg = tf.reduce_mean(loss)  # average energy at the optimized parameters
            S_final = entropy_vag(v)
            S_final_avg = tf.reduce_mean(S_final)  # average Renyi entanglement entropy at the optimized parameters

        v = opt.update(gr, v)

    return (
        E_initial_avg.numpy(),
        E_final_avg.numpy(),
        S_initial_avg.numpy(),
        S_final_avg.numpy(),
    )
[9]:
v = tf.random.uniform([num_trial, L, N], minval=0.0, maxval=2 * np.pi, dtype=tf.float64)
E_initial, E_final, S_initial, S_final = opt_main(v)
2022-02-18 15:28:57.668768: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-02-18 15:28:58.399077: I tensorflow/compiler/xla/service/service.cc:171] XLA service 0x7fe206d450b0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2022-02-18 15:28:58.399104: I tensorflow/compiler/xla/service/service.cc:179]   StreamExecutor device (0): Host, Default Version
2022-02-18 15:28:58.581897: I tensorflow/compiler/jit/xla_compilation_cache.cc:351] Compiled cluster using XLA!  This line is logged at most once for the lifetime of the process.
[10]:
print("Number of layers = ", L, ";")
print(
    "ΔE (initial) = ",
    E_initial - E0,
    ";",
    "ΔE (final) = ",
    E_final - E0,
)
print(
    "S (initial) = ",
    S_initial,
    ";",
    "S (final) = ",
    S_final,
)
Number of layers =  2 ;
ΔE (initial) =  6.496137029102603 ; ΔE (final) =  3.666036635713118
S (initial) =  0.561303369826885 ; S (final) =  0.07905923157886821

Results#

Energy and Renyi entropy for a system of \(n = 12\) qubits with different numbers of layers \(L\), averaged over 13 random circuit instances. For each independent random instance, we allow 1000 parameter-update steps, which leaves enough time to converge toward the ground state. We can see that the Renyi entanglement entropy at random parameters saturates near its maximum possible value, which is detrimental to reaching the ground state of the VQE Hamiltonian.

renyiN12.jpeg

References#

Reference paper: https://arxiv.org/pdf/2201.01452.pdf.