tensorcircuit.interfaces.torch#

Interface that wraps a quantum function as a torch function

tensorcircuit.interfaces.torch.pytorch_interface(fun: Callable[[...], Any], jit: bool = False, enable_dlpack: bool = False) → Callable[[...], Any]#

Wrap a quantum function defined on a different ML backend with a PyTorch interface.

Example

import torch

import tensorcircuit as tc

tc.set_backend("tensorflow")


def f(params):
    c = tc.Circuit(1)
    c.rx(0, theta=params[0])
    c.ry(0, theta=params[1])
    return c.expectation([tc.gates.z(), [0]])


f_torch = tc.interfaces.torch_interface(f, jit=True)

a = torch.ones([2], requires_grad=True)
b = f_torch(a)
c = b ** 2
c.backward()

print(a.grad)
Parameters
  • fun (Callable[..., Any]) – The quantum function with tensor in and tensor out

  • jit (bool, optional) – whether to jit fun, defaults to False

  • enable_dlpack (bool, optional) – whether to transfer tensors between backends via DLPack, defaults to False

Returns

The same quantum function, but now with torch tensors in and torch tensors out, with AD supported

Return type

Callable[…, Any]
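The wrapped function behaves like any other differentiable torch function, so it can be dropped into a standard PyTorch training loop. The snippet below is a minimal sketch rather than an official recipe, assuming the tensorflow backend as above; the real part of the expectation is taken explicitly so that the loss is a real scalar.

import torch

import tensorcircuit as tc

tc.set_backend("tensorflow")


def f(params):
    c = tc.Circuit(1)
    c.rx(0, theta=params[0])
    c.ry(0, theta=params[1])
    return tc.backend.real(c.expectation([tc.gates.z(), [0]]))


f_torch = tc.interfaces.torch_interface(f, jit=True)

params = torch.nn.Parameter(torch.ones([2]))
opt = torch.optim.Adam([params], lr=0.1)

for _ in range(50):
    loss = f_torch(params)  # torch tensor in, torch tensor out
    opt.zero_grad()
    loss.backward()  # gradients flow back through the tensorflow backend
    opt.step()

print(params.detach(), f_torch(params))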

tensorcircuit.interfaces.torch.torch_interface(fun: Callable[[...], Any], jit: bool = False, enable_dlpack: bool = False) → Callable[[...], Any][source]#

Wrap a quantum function defined on a different ML backend with a PyTorch interface.

Example

import torch

import tensorcircuit as tc

tc.set_backend("tensorflow")


def f(params):
    c = tc.Circuit(1)
    c.rx(0, theta=params[0])
    c.ry(0, theta=params[1])
    return c.expectation([tc.gates.z(), [0]])


f_torch = tc.interfaces.torch_interface(f, jit=True)

a = torch.ones([2], requires_grad=True)
b = f_torch(a)
c = b ** 2
c.backward()

print(a.grad)
Parameters
  • fun (Callable[..., Any]) – The quantum function with tensor in and tensor out

  • jit (bool, optional) – whether to jit fun, defaults to False

  • enable_dlpack (bool, optional) – whether to transfer tensors between backends via DLPack, defaults to False

Returns

The same quantum function, but now with torch tensors in and torch tensors out, with AD supported

Return type

Callable[…, Any]
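Because gradients propagate through the interface, the wrapped function can also sit inside a torch.nn.Module as part of a hybrid classical-quantum model. Below is a minimal sketch assuming the tensorflow backend; the HybridModel class and its layer sizes are illustrative and not part of the tensorcircuit API, and the expectation is cast to its real part so the output is a real scalar.

import torch

import tensorcircuit as tc

tc.set_backend("tensorflow")


def f(params):
    c = tc.Circuit(1)
    c.rx(0, theta=params[0])
    c.ry(0, theta=params[1])
    return tc.backend.real(c.expectation([tc.gates.z(), [0]]))


f_torch = tc.interfaces.torch_interface(f, jit=True)


class HybridModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # classical pre-processing layer producing the two circuit parameters
        self.linear = torch.nn.Linear(4, 2)

    def forward(self, x):
        params = self.linear(x)
        # quantum expectation evaluated on the tensorflow backend;
        # gradients flow back into the linear layer through the interface
        return f_torch(params)


model = HybridModel()
out = model(torch.randn(4))
out.backward()

print(model.linear.weight.grad)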

tensorcircuit.interfaces.torch.torch_interface_kws(f: Callable[[...], Any], jit: bool = True, enable_dlpack: bool = False) → Callable[[...], Any][source]#

Similar to tensorcircuit.interfaces.torch.torch_interface(), but this interface additionally supports static arguments for the function f, i.e. non-tensor arguments that are passed as keyword arguments.

Example

import torch

import tensorcircuit as tc

tc.set_backend("tensorflow")

def f(tensor, integer):
    r = 0.
    for i in range(integer):
        r += tensor
    return r

fnew = tc.interfaces.torch_interface_kws(f)

print(fnew(torch.ones([2]), integer=3))
print(fnew(torch.ones([2]), integer=4))
Parameters
  • f (Callable[..., Any]) – The quantum function with tensor in and tensor out, possibly taking additional static (non-tensor) keyword arguments

  • jit (bool, optional) – whether to jit f, defaults to True

  • enable_dlpack (bool, optional) – whether to transfer tensors between backends via DLPack, defaults to False

Returns

The same function, but now with torch tensors in and torch tensors out, with AD supported; static arguments are passed as keyword arguments

Return type

Callable[…, Any]
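For a more circuit-flavoured usage, the sketch below passes the circuit depth as a static, non-tensor keyword argument while keeping the parameter tensor differentiable; it assumes the tensorflow backend, and the names energy and nlayers are illustrative.

import torch

import tensorcircuit as tc

tc.set_backend("tensorflow")


def energy(params, nlayers):
    c = tc.Circuit(2)
    for j in range(nlayers):
        c.rx(0, theta=params[j, 0])
        c.rx(1, theta=params[j, 1])
        c.cnot(0, 1)
    return tc.backend.real(c.expectation([tc.gates.z(), [0]]))


energy_torch = tc.interfaces.torch_interface_kws(energy, jit=True)

params = torch.ones([3, 2], requires_grad=True)
loss = energy_torch(params, nlayers=3)  # nlayers is a static keyword argument
loss.backward()

print(params.grad)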