# Quantum Neural Networks (QNN)

Classiq's QNN package is integrated with `PyTorch` and allows the user to define `torch` networks with the addition of quantum layers.

## Introduction and Background

Note: this introduction assumes basic knowledge of classical neural networks.

One can describe a neural network as a list of layers, where each layer takes in a vector and outputs a different vector.
We treat the vectors as 1-dimensional; other shapes are equivalent up to a `reshape`.
The important point is that the input and output of each layer is a vector of classical data, which we'll call a `list` of `float`s.

This implies that any quantum layer, assuming the structure of the data transfer between layers doesn't change, must take in classical data and return classical data.
This is done by assigning the incoming data to the `Parameters` of the circuit, then measuring the quantum layer, and finally applying some classical post-processing calculations.

### Examples

One example of post-processing the output of a quantum layer (which is a quantum circuit) is returning a single number between `0` and `1`, indicating the confidence of a single choice.
This is most common in binary classification, where a single qubit is measured, and the output of the quantum layer is `the number of |0> measurements` divided by `the total number of measurements`.
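
As a minimal sketch (plain Python, not Classiq's API; the counts format is an assumption), such post-processing might look like:

```
def confidence_of_zero(counts: dict[str, int]) -> float:
    """Fraction of shots in which the single measured qubit was |0>."""
    total_shots = sum(counts.values())
    return counts.get("0", 0) / total_shots

# Example: confidence_of_zero({"0": 70, "1": 30}) -> 0.7
```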

Another example of post-processing is simply returning the probability (or amplitude) of each result. For example, if the measurement result is

```
{
    "00": 10,
    "01": 20,
    "10": 30,
    "11": 40,
}
```

Then the output of the quantum layer can be:

```
[0.1, 0.2, 0.3, 0.4]
```

Here we normalized by the total number of measurements, and the result is an ordered list of the probabilities of each result.
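
A minimal sketch of this normalization (plain Python, not Classiq's API):

```
from itertools import product

def counts_to_probabilities(counts: dict[str, int], num_qubits: int) -> list[float]:
    """Normalize measurement counts into an ordered list of probabilities."""
    total_shots = sum(counts.values())
    bitstrings = ("".join(bits) for bits in product("01", repeat=num_qubits))
    return [counts.get(bitstring, 0) / total_shots for bitstring in bitstrings]

# counts_to_probabilities({"00": 10, "01": 20, "10": 30, "11": 40}, num_qubits=2)
# -> [0.1, 0.2, 0.3, 0.4]
```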

### Parameters - inputs vs weights

A complete quantum layer takes 2 types of parameters: "inputs" and "weights".

The "input" parameters handle the encoding of the data (the classical `list`

of `float`

s), whereas the "weight" parameters undergo gradient descent in the usual NN way.

The "input" parameters will usually be handled by the first sub-layer, while the "weight" parameters will usually be handled by the rest of the sub-layers.

In Classiq, we separate the two types of parameters by their name prefix:

- "input_something" or "i_something" for inputs
- "weight_something" or "w_something" for weights

## Classiq's API

### `QLayer`

Classiq exports the `QLayer` object, which inherits from `torch.nn.Module` (like most objects in the `torch.nn` namespace), and it acts like one.
For example:

```
import classiq
from torch import Tensor, nn

class MyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear_layer = nn.Linear(...)
        self.quantum_layer = classiq.QLayer(...)

    def forward(self, x: Tensor) -> Tensor:
        x = self.linear_layer(x)
        x = self.quantum_layer(x)
        return x
```

The full declaration of the `QLayer` object, with explanations of the parameters it takes, is described here.
In short, a quantum layer takes in a parametric quantum circuit (PQC), an execution function (which handles both the quantum execution and the classical post-processing), and a few optional parameters.
Behind the scenes, the `QLayer` does the following:

- Processes the PQC
- Initializes and tracks the parameters
- Passes the inputs and weights (as multi-dimensional tensors) to the execution function (see the sketch after this list)
- Handles the gradient calculation on the PQC
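
For intuition only, here is a highly simplified sketch of that mechanism (hypothetical names, not Classiq's implementation; in particular, the real `QLayer` also handles the PQC processing and the gradient calculation):

```
import torch
from torch import Tensor, nn

class SketchQuantumLayer(nn.Module):
    """A simplified, hypothetical sketch of the mechanism described above."""

    def __init__(self, num_weights: int, execute):
        super().__init__()
        # "weight" parameters are tracked by torch and updated via gradient descent
        self.weight = nn.Parameter(torch.rand(num_weights))
        # the execution function handles quantum execution + classical post-processing
        self.execute = execute

    def forward(self, inputs: Tensor) -> Tensor:
        # inputs: (batch_size, num_input_parameters) of classical data;
        # the execution function returns classical data for the next layer
        return self.execute(inputs, self.weight)
```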

Further explanation about `QLayer` is available here.

### Datasets

Classiq provides two very simple datasets for experimenting with the examples.

The first, named "NOT", takes in a single-qubit state (either |0> or |1>), and returns an \(n\)-qubit state of all-ones or all-zeros, respectively.
For example, for \(n=2\): `0 -> |11>`, `1 -> |00>`.

The second, named "XOR", takes in an \(n\)-qubit state, and returns a single classical bit, equal to the bitwise-xor of all the bits from the input state.
For example: `101 -> 0`, `10101 -> 1`, `10 -> 1`, `11 -> 0`.
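
For reference, a small sketch (plain Python, not the datasets' actual implementation) of the two label rules:

```
from functools import reduce

def not_label(bit: int, n: int) -> str:
    """Label rule of the "NOT" dataset: |0> maps to n ones, |1> maps to n zeros."""
    return ("1" if bit == 0 else "0") * n

def xor_label(bits: str) -> int:
    """Label rule of the "XOR" dataset: bitwise-xor (parity) of all input bits."""
    return reduce(lambda acc, b: acc ^ int(b), bits, 0)

# not_label(0, n=2) -> "11" ; not_label(1, n=2) -> "00"
# xor_label("101") -> 0 ; xor_label("10101") -> 1 ; xor_label("10") -> 1 ; xor_label("11") -> 0
```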

Further explanation about the datasets is available here.

### Gradients

Classiq can automatically calculate the gradient of a PQC, and more gradient methods are coming soon. Stay tuned!
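
For background only (this is a standard textbook technique, not necessarily the method Classiq uses), the parameter-shift rule computes the gradient of a PQC's expectation value from two shifted circuit evaluations; `expectation` below is a hypothetical callable mapping parameter values to an expectation value:

```
import numpy as np

def parameter_shift_gradient(expectation, params: np.ndarray) -> np.ndarray:
    """Parameter-shift rule for rotation-like gates (generator eigenvalues +-1/2):
    d<E>/d(theta_i) = [E(theta_i + pi/2) - E(theta_i - pi/2)] / 2."""
    grad = np.zeros_like(params)
    for i in range(len(params)):
        shifted = params.copy()
        shifted[i] += np.pi / 2
        plus = expectation(shifted)
        shifted[i] -= np.pi  # now theta_i - pi/2
        minus = expectation(shifted)
        grad[i] = (plus - minus) / 2
    return grad
```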

### A full example

A full working example can be found here.