
Rainbow options with Integration

In this notebook we go through the implementation of the Integration Method for the rainbow option presented in [1].

Introduction

In finance, a crucial aspect of asset pricing pertains to derivatives. Derivatives are contracts whose value is contingent upon another source, known as the underlying. The pricing of options, a specific derivative instrument, involves determining the fair market value (discounted payoff) of contracts affording their holders the right, though not the obligation, to buy (call) or sell (put) one or more underlying assets at a predefined strike price by a specified future expiration date (maturity date). This process relies on mathematical models, considering variables like current asset prices, time to expiration, volatility, and interest rates.
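A rainbow option is a derivative whose payoff depends on more than one underlying asset. Following [1], we consider a call on the maximum of the underlyings, whose payoff at the maturity date is

\[\max\left(\max_i S_T^i - K,\; 0\right)\]

where \(S_T^i\) is the price of the \(i\)-th asset at maturity and \(K\) is the strike price.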

Data Definitions

The problem inputs are:

  • NUM_QUBITS: the number of qubits representing an underlying asset

  • NUM_ASSETS: the number of underlying assets

  • K: the strike price

  • S0: the array of the underlying assets' prices

  • dt: the number of days to the maturity date

  • COV: the covariance matrix that correlates the underlyings

  • MU_LOG_RET: the array containing the mean of the log return of each underlying

import numpy as np
import scipy

NUM_QUBITS = 2
NUM_ASSETS = 2

K = 190
S0 = [193.97, 189.12]
dt = 250

COV = np.array([[0.000335, 0.000257], [0.000257, 0.000418]])
MU_LOG_RET = np.array([0.00050963, 0.00062552])
MU = MU_LOG_RET * dt  # mean log return over the full dt-day period
CHOLESKY = np.linalg.cholesky(COV) * np.sqrt(dt)  # Cholesky factor of COV, scaled to the full period
SCALING_FACTOR = 1 / CHOLESKY[0, 0]  # normalization factor used in the affine arithmetic below
from classiq import *

EPSILON_VALUE = 0.05
ALPHA_VALUE = 0.1
EPSILON = QConstant("EPSILON", float, EPSILON_VALUE)
ALPHA = QConstant("ALPHA", float, ALPHA_VALUE)
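These constants are passed later to the IQAE execution primitive (iqae(epsilon=EPSILON, alpha=ALPHA)): the amplitude is estimated to within an error of EPSILON with a confidence level of \(1-\)ALPHA.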

Gaussian State preparation

Encode the probability distribution of a discrete multivariate random variable \(W\) taking values in \(\{w_0, \ldots, w_{N-1}\}\) describing the assets' prices at the maturity date. The number of discretized values, denoted as \(N\), depends on the precision of the state preparation module and is consequently connected to the number of qubits (\(n=\)NUM_QUBITS) according to the formula \(N=2^n\).

\[\sum_{i=0}^{N-1} \sqrt{p(w_i)}\left|w_i\right\rangle\]
def gaussian_discretization(num_qubits, mu=0, sigma=1, stds_around_mean_to_include=3):
    lower = mu - stds_around_mean_to_include * sigma
    upper = mu + stds_around_mean_to_include * sigma
    num_of_bins = 2**num_qubits
    sample_points = np.linspace(lower, upper, num_of_bins + 1)

    def single_gaussian(x: np.ndarray, _mu: float, _sigma: float) -> np.ndarray:
        cdf = scipy.stats.norm.cdf(x, loc=_mu, scale=_sigma)
        return cdf[1:] - cdf[0:-1]

    non_normalized_pmf = single_gaussian(sample_points, mu, sigma)
    real_probs = non_normalized_pmf / np.sum(non_normalized_pmf)
    return sample_points[:-1], real_probs.tolist()


grid_points, probabilities = gaussian_discretization(NUM_QUBITS)

STEP_X = grid_points[1] - grid_points[0]
MIN_X = grid_points[0]
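As a quick classical check (not part of the algorithm itself), the discretized PMF should consist of \(2^n\) non-negative probabilities summing to one:

assert len(probabilities) == 2**NUM_QUBITS
assert np.isclose(sum(probabilities), 1.0)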

SANITY CHECK

The process must be stopped if the strike price \(K\) is greater than the maximum value reachable by the assets during the simulation, to avoid meaningless results. The payoff is \(0\) in this case, so there is nothing to simulate.

from IPython.display import Markdown

if K >= max(S0 * np.exp(np.dot(CHOLESKY, [grid_points[-1]] * 2) + MU)):
    display(
        Markdown(
            "<font color='red'> K always greater than the maximum asset values. Stop the run, the payoff is 0</font>"
        )
    )

Maximum Computation

Precision utils

FRAC_PLACES = 2


def round_factor(a):
    precision_factor = 2**FRAC_PLACES
    return round(a * precision_factor) / precision_factor


def floor_factor(a):
    precision_factor = 2**FRAC_PLACES
    return np.floor(a * precision_factor) / precision_factor
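For illustration, with FRAC_PLACES = 2 both helpers snap a value to a multiple of \(2^{-2}=0.25\), rounding and flooring respectively:

print(round_factor(1.4))  # 1.5
print(floor_factor(1.4))  # 1.25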

Affine and maximum arithmetic definitions

from functools import reduce

from classiq.qmod.symbolic import max as qmax

a = STEP_X / SCALING_FACTOR
b = np.log(S0[0]) + MU[0] + MIN_X * CHOLESKY[0].sum()
c = (
    SCALING_FACTOR
    * (
        np.log(S0[1])
        + MU[1]
        - (np.log(S0[0]) + MU[0])
        + MIN_X * sum(CHOLESKY[1] - CHOLESKY[0])
    )
    / STEP_X
)
c = round_factor(c)


def get_affine_formula(assets, i):
    return reduce(
        lambda x, y: x + y,
        [
            assets[j] * round_factor(SCALING_FACTOR * CHOLESKY[i, j])
            for j in range(NUM_ASSETS)
            if CHOLESKY[i, j]
        ],
    )


def calculate_max_reg_type():
    x1 = QNum("x1", NUM_QUBITS, False, 0)
    x2 = QNum("x2", NUM_QUBITS, False, 0)
    expr = qmax(get_affine_formula([x1, x2], 0), get_affine_formula([x1, x2], 1) + c)
    size_in_bits, sign, fraction_digits = get_expression_numeric_attributes(
        [x1, x2], expr
    )
    return size_in_bits, fraction_digits


MAX_NUM_QUBITS = calculate_max_reg_type()[0]
MAX_FRAC_PLACES = calculate_max_reg_type()[1]
@qfunc
def affine_max(x1: QNum, x2: QNum, res: Output[QNum]):
    res |= qmax(get_affine_formula([x1, x2], 0), get_affine_formula([x1, x2], 1) + c)
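For intuition, the same expression can be evaluated classically with plain Python. The sketch below (illustration only, not part of the quantum flow) mirrors what affine_max computes coherently for given grid indices:

def classical_affine_max(x1: int, x2: int) -> float:
    # Same affine expressions as get_affine_formula, evaluated on classical integers
    asset_0 = sum(
        x * round_factor(SCALING_FACTOR * CHOLESKY[0, j]) for j, x in enumerate([x1, x2])
    )
    asset_1 = sum(
        x * round_factor(SCALING_FACTOR * CHOLESKY[1, j]) for j, x in enumerate([x1, x2])
    )
    return max(asset_0, asset_1 + c)


print(classical_affine_max(0, 0), classical_affine_max(2**NUM_QUBITS - 1, 2**NUM_QUBITS - 1))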

Integration Method

The comparator accumulates the probabilities \(g(r)\) of the \(|r\rangle\) register over all values satisfying \(r \leq x\):

\begin{equation}
\begin{split}
&\sum_{r=0}^{2^R-1}{\sqrt{g(r)}}\left|x\right\rangle\left|r\right\rangle\left|r\leq x\right\rangle \\
= &\left|x\right\rangle \otimes \left[ \sum_{r=0}^{x}{\sqrt{g(r)}} \left|r\right\rangle \left|1\right\rangle + \sum_{r=x+1}^{2^R-1}{\sqrt{g(r)}} \left|r\right\rangle \left|0\right\rangle \right]
\end{split}
\end{equation}

Collecting the probability of having \(r\leq x\), we can define the function

\begin{equation}
\tilde{h}(x)=\sum_{r=0}^{x}g(r).
\end{equation}

For a given \(|x\rangle\), the probability of measuring \(|1\rangle\) on the last qubit is therefore \(\tilde{h}(x)\), so amplitude estimation over the full state returns the expectation value of \(\tilde{h}\). To obtain a given target function \(\tilde{h}\), a proper \(g(r)\) must be chosen: for \(r=0\) it must be \(g(0) = \tilde{h}(0)\), and for all other \(r\):

\[g(r) = \tilde{h}(r)-\tilde{h}(r-1)\]
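The integrator below loads such a \(g(r)\) on the ref register using prepare_exponential_state. A short classical sketch (illustration only, with an arbitrary rate and register size) shows why an exponential \(g(r)\) is the right choice when the target \(\tilde{h}(x)\) is of the form \(e^{a(x+1)}-1\) up to normalization, as in the post-processing below: the increments \(g(r)=\tilde{h}(r)-\tilde{h}(r-1)\) are themselves proportional to \(e^{ar}\).

rate = 0.3  # illustrative rate, standing in for exp_rate
R = 4  # illustrative register size
r = np.arange(2**R)

h_tilde = np.exp(rate * (r + 1)) - 1  # target cumulative function, up to normalization
g = np.diff(np.concatenate([[0.0], h_tilde]))  # g(0) = h~(0), g(r) = h~(r) - h~(r-1)

assert np.allclose(g, (np.exp(rate) - 1) * np.exp(rate * r))  # g(r) is exponential in r
assert np.allclose(np.cumsum(g), h_tilde)  # and its cumulative sum recovers h~(x)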
@qfunc
def integrator(x: QNum, ref: QNum, res: QBit) -> None:
    exp_rate = (1 / (2**x.fraction_digits)) * a
    prepare_exponential_state(-exp_rate, ref)
    res ^= x >= ref
from classiq.qmod.symbolic import asin, exp, sqrt


def get_strike_price_theta_integration(x: QNum):
    exp_rate = (1 / (2**x.fraction_digits)) * a
    B = (exp((2**x.size) * exp_rate) - 1) / exp(exp_rate)
    A = 1 / exp(exp_rate)
    C = S0[0] * exp((MU[0] + MIN_X * CHOLESKY[0].sum()))
    return 2 * asin(sqrt((K - (C * A)) / (C * B)))


@qfunc
def integration_load_amplitudes(
    geq_reg: QNum, max_reg: QNum, integrator_reg: QNum, ind_reg: QBit
):
    control(geq_reg == 1, lambda: integrator(max_reg, integrator_reg, ind_reg))
    strike_price_theta = get_strike_price_theta_integration(max_reg)
    control(geq_reg == 0, lambda: RY(strike_price_theta, ind_reg))
@qfunc
def asset_geq_strike_price(
    x: QNum,
    res: Output[QBit],
) -> None:
    a = STEP_X / SCALING_FACTOR
    b = np.log(S0[0]) + MU[0] + MIN_X * CHOLESKY[0].sum()
    COMP_VALUE = (np.log(K) - b) / a
    res |= x > floor_factor(COMP_VALUE)
from classiq import within_apply


@qfunc
def integration_payoff(max_reg: QNum, integrator_reg: QNum, ind_reg: QBit):
    geq_reg = QBit("geq_reg")
    within_apply(
        lambda: asset_geq_strike_price(max_reg, geq_reg),
        lambda: integration_load_amplitudes(geq_reg, max_reg, integrator_reg, ind_reg),
    )
class EstimationVars(QStruct):
    x1: QNum[NUM_QUBITS, False, 0]
    x2: QNum[NUM_QUBITS, False, 0]
    integrator: QNum[MAX_NUM_QUBITS, False, MAX_FRAC_PLACES]
    ind: QBit


@qfunc
def rainbow_integration(qvars: EstimationVars) -> None:
    inplace_prepare_state(probabilities, 0, qvars.x1)
    inplace_prepare_state(probabilities, 0, qvars.x2)
    max_out = QNum("max_out")
    within_apply(
        lambda: affine_max(qvars.x1, qvars.x2, max_out),
        lambda: integration_payoff(max_out, qvars.integrator, qvars.ind),
    )


@qfunc
def main(qvars: Output[EstimationVars]) -> None:
    allocate(qvars.size, qvars)
    rainbow_integration(qvars)


constraints = Constraints(max_width=23)
qmod = create_model(main, constraints=constraints)
print("Starting synthesis")
qprog = synthesize(qmod)
show(qprog)
Starting synthesis

IQAE algorithm

from classiq.qmod.builtins.classical_execution_primitives import iqae, save


@qfunc
def qmci_oracle(qvars: EstimationVars):
    Z(qvars.ind)


@cfunc
def cmain():
    iqae_res = iqae(epsilon=EPSILON, alpha=ALPHA)
    save({"iqae_res": iqae_res})
from classiq import CInt, QArray, QCallable, grover_operator, power


@qfunc
def grover_algorithm(
    k: CInt,
    oracle_operand: QCallable[QArray[QBit]],
    sp_operand: QCallable[QArray[QBit]],
    x: QArray[QBit],
):
    sp_operand(x)
    power(k, lambda: grover_operator(oracle_operand, sp_operand, x))
def get_main():
    @qfunc
    def main(
        k: CInt,
        ind: Output[QBit],
    ) -> None:
        qvars = EstimationVars("qvars")
        allocate(qvars.size, qvars)
        grover_algorithm(
            k,
            qmci_oracle,
            rainbow_integration,
            qvars,
        )
        state = QArray("state")
        bind(qvars, [state, ind])

    return main
from classiq import execute, write_qmod


def synthesize_and_execute(post_process):
    constraints = Constraints(max_width=25)
    qmod = create_model(
        get_main(),
        constraints=constraints,
        classical_execution_function=cmain,
    )
    write_qmod(qmod, "rainbow_options_integration_method")
    print("Starting synthesis")
    qprog = synthesize(qmod)
    show(qprog)
    print("Starting execution")
    res = execute(qprog).result()
    iqae_res = res[0].value
    print("raw iqae results:", iqae_res.estimation, iqae_res.confidence_interval)
    parsed_res = post_process(res[0].value)

    return (qmod, qprog, iqae_res, parsed_res)

Post Process

In the post-processing we map the amplitude estimated by IQAE back to the option value. Besides the rescaling, a constant term has to be added, following the identity:

\begin{equation}
\begin{split}
&\mathbb{E} \left[\max\left(\frac{e^{a(x+1)} - 1}{e^{a(x_{max} +1)}-1}c + \frac{1}{e^a} , Ke^{-b'}\right)\right] e^{b'} - K \\
= &\mathbb{E} \left[\max\left(\frac{e^{a(x+1)} - 1}{e^{a(x_{max} +1)}-1}, \frac{Ke^{-b'}}{c} - \frac{e^{-a}}{c}\right)\right]ce^{b'} + e^{b'}e^{-a} - K
\end{split}
\end{equation}
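As a quick numeric sanity check (illustration only, with arbitrary values), the two sides of the identity above agree:

rng = np.random.default_rng(0)
x_val, x_max, a_, b_, c_, K_ = rng.uniform(0.5, 2.0, size=6)

scaled = (np.exp(a_ * (x_val + 1)) - 1) / (np.exp(a_ * (x_max + 1)) - 1)
lhs = max(scaled * c_ + np.exp(-a_), K_ * np.exp(-b_)) * np.exp(b_) - K_
rhs = (
    max(scaled, K_ * np.exp(-b_) / c_ - np.exp(-a_) / c_) * c_ * np.exp(b_)
    + np.exp(b_) * np.exp(-a_)
    - K_
)
assert np.isclose(lhs, rhs)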
exp_rate = (1 / (2**MAX_FRAC_PLACES)) * a
B = (np.exp((2**MAX_NUM_QUBITS) * exp_rate) - 1) / np.exp(exp_rate)
A = 1 / np.exp(exp_rate)
C = S0[0] * np.exp((MU[0] + MIN_X * CHOLESKY[0].sum()))


def parse_result_integration(iqae_res):
    option_value = (iqae_res.estimation * (C * B)) + (C * A) - K
    confidence_interval = (
        (np.array(iqae_res.confidence_interval) * (C * B)) + (C * A) - K
    )
    return (option_value, confidence_interval)

Run method

qmod_integration, qprog_integration, iqae_res_integration, parsed_res_integration = (
    synthesize_and_execute(parse_result_integration)
)
result, conf_interval = parsed_res_integration
print(
    f"raw iqae results: {iqae_res_integration.estimation} with confidence interval {iqae_res_integration.confidence_interval}"
)
print(f"option estimated value: {result} with confidence interval {conf_interval}")
Starting synthesis
Starting execution
raw iqae results: 0.051301905160993394 (0.04602614017739099, 0.056577670144595804)
raw iqae results: 0.051301905160993394 with confidence interval (0.04602614017739099, 0.056577670144595804)
option estimated value: 25.863852335039155 with confidence interval [12.19669021 39.53101446]

Assertions

expected_payoff = 23.0238
ALPHA_ASSERTION = 1e-5
measured_confidence = conf_interval[1] - conf_interval[0]
confidence_scale_by_alpha = np.sqrt(
    np.log(ALPHA_VALUE / ALPHA_ASSERTION)
)  # based on eps^2 = (1/2N) * log(2T/alpha) from "Iterative Quantum Amplitude Estimation";
# since our assertion alpha is much lower, we check against a correspondingly wider confidence interval
assert (
    np.abs(result - expected_payoff)
    <= 0.5 * measured_confidence * confidence_scale_by_alpha
), f"Payoff result is out of the {ALPHA_ASSERTION*100}% confidence interval: |{result} - {expected_payoff}| > {0.5*measured_confidence * confidence_scale_by_alpha}"

References

[1]: Francesca Cibrario et al., Quantum Amplitude Loading for Rainbow Options Pricing. Preprint