Rainbow Options with Direct Amplitude Loading
This notebook covers the implementation of the direct amplitude loading method for pricing a rainbow option, as presented in [1].
In finance, a crucial aspect of asset pricing pertains to derivatives: contracts whose value is contingent upon another source, known as the underlying. The pricing of options, a specific derivative instrument, involves determining the fair market value (discounted payoff) of contracts that afford their holders the right, though not the obligation, to buy (call) or sell (put) one or more underlying assets at a predefined strike price by a specified future expiration date (maturity date). This process relies on mathematical models that account for variables such as current asset prices, time to expiration, volatility, and interest rates. A rainbow option is an option written on several underlying assets; in the variant priced here, the payoff depends on the maximum of the underlyings at maturity.
Data Definitions
The problem inputs:

- NUM_QUBITS: the number of qubits representing an underlying asset
- NUM_ASSETS: the number of underlying assets
- K: the strike price
- S0: the array of initial underlying asset prices
- dt: the number of days to the maturity date
- COV: the covariance matrix of the underlyings
- MU_LOG_RET: the array containing the mean log return of each underlying
import numpy as np
import scipy
NUM_QUBITS = 2
NUM_ASSETS = 2
K = 190
S0 = [193.97, 189.12]
dt = 250
COV = np.array([[0.000335, 0.000257], [0.000257, 0.000418]])
MU_LOG_RET = np.array([0.00050963, 0.00062552])
MU = MU_LOG_RET * dt
CHOLESKY = np.linalg.cholesky(COV) * np.sqrt(dt)
SCALING_FACTOR = 1 / CHOLESKY[0, 0]
from classiq import *
EPSILON = 0.05  # target accuracy of the amplitude estimate
ALPHA = 0.1  # the estimate lies within EPSILON of the true value with probability 1 - ALPHA
Gaussian State Preparation
Encode the probability distribution of a discrete multivariate random variable \(W\) taking values in \(\{w_0, \ldots, w_{N-1}\}\) that describes the asset prices at the maturity date. The number of discretized values, denoted \(N\), depends on the precision of the state preparation module and is therefore connected to the number of qubits \(n\) through \(N=2^n\):
def gaussian_discretization(num_qubits, mu=0, sigma=1, stds_around_mean_to_include=3):
lower = mu - stds_around_mean_to_include * sigma
upper = mu + stds_around_mean_to_include * sigma
num_of_bins = 2**num_qubits
sample_points = np.linspace(lower, upper, num_of_bins + 1)
def single_gaussian(x: np.ndarray, _mu: float, _sigma: float) -> np.ndarray:
cdf = scipy.stats.norm.cdf(x, loc=_mu, scale=_sigma)
return cdf[1:] - cdf[0:-1]
    non_normalized_pmf = single_gaussian(sample_points, mu, sigma)
    real_probs = non_normalized_pmf / np.sum(non_normalized_pmf)
    return sample_points[:-1], real_probs.tolist()
grid_points, probabilities = gaussian_discretization(NUM_QUBITS)
STEP_X = grid_points[1] - grid_points[0]
MIN_X = grid_points[0]
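A quick check of the discretization (illustrative only): with the default parameters and NUM_QUBITS = 2, the grid has \(2^2=4\) points and the truncated PMF is normalized.

assert len(probabilities) == 2**NUM_QUBITS  # N = 2^n grid points
assert np.isclose(sum(probabilities), 1.0)  # truncated PMF is normalized
print(grid_points)  # [-3.  -1.5  0.   1.5]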
Sanity Check
To avoid meaningless results, the process must stop if the strike price \(K\) is greater than the maximum value reachable by the assets during the simulation. In that case the payoff is \(0\), so there is no need to run the simulation:
from IPython.display import Markdown
if K >= max(S0 * np.exp(np.dot(CHOLESKY, [grid_points[-1]] * 2) + MU)):
display(
Markdown(
"<font color='red'> K always greater than the maximum asset values. Stop the run, the payoff is 0</font>"
)
)
Maximum Computation
Precision Utils
FRAC_PLACES = 2
def round_factor(a):
precision_factor = 2**FRAC_PLACES
return round(a * precision_factor) / precision_factor
def floor_factor(a):
precision_factor = 2**FRAC_PLACES
return np.floor(a * precision_factor) / precision_factor
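For example, with FRAC_PLACES = 2 both helpers snap values onto a grid of quarters (a small illustrative check):

print(round_factor(0.30))  # 0.25, nearest multiple of 1 / 2**FRAC_PLACES
print(round_factor(0.40))  # 0.5
print(floor_factor(0.74))  # 0.5, rounded down to a multiple of 0.25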
Affine and Maximum Arithmetic Definitions
Considering the time delta between the starting date (\(t_0\)) and the maturity date (\(t\)), express the return value \(R_i\) of the \(i\)-th asset as \(R_i = \mu_i + y_i\), where:
- \(\mu_i = (t-t_0)\tilde{\mu}_i\), with \(\tilde{\mu}_i\) the expected daily log return, which can be estimated from the historical time series of log returns of the \(i\)-th asset;
- \(y_i\) is obtained through the dot product between the matrix \(\mathbf{L}\) and the discretized standard multivariate Gaussian sample:

\(y_i = \sum_{k} l_{ik}\left(x_{min} + d_k \Delta x\right)\)

Here \(\Delta x\) is the Gaussian discretization step, \(x_{min}\) is the lower Gaussian truncation value, and \(d_k \in [0,2^m-1]\) is the sample taken from the \(k\)-th standard Gaussian. \(l_{ik}\) is the \((i,k)\) entry of the matrix \(\mathbf{L}=\mathbf{C}\sqrt{(t-t_0)}\), where \(\mathbf{C}\) is the lower triangular matrix obtained by applying the Cholesky decomposition to the historical daily log-return covariance matrix:
from functools import reduce
from classiq.qmod.symbolic import max as qmax
a = STEP_X / SCALING_FACTOR  # converts the scaled integer register back to log-return units
b = np.log(S0[0]) + MU[0] + MIN_X * CHOLESKY[0].sum()  # log-price of asset 0 when all samples sit at the lowest grid point
def get_affine_formula(assets, i):
return reduce(
lambda x, y: x + y,
[
assets[j] * round_factor(SCALING_FACTOR * CHOLESKY[i, j])
for j in range(NUM_ASSETS)
if CHOLESKY[i, j]
],
)
# Offset (in scaled register units) of asset 1's log-price relative to asset 0's,
# entering the second argument of the maximum
c = (
    SCALING_FACTOR
    * (
        np.log(S0[1])
        + MU[1]
        - (np.log(S0[0]) + MU[0])
        + MIN_X * sum(CHOLESKY[1] - CHOLESKY[0])
    )
    / STEP_X
)
c = round_factor(c)
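For reference, the decomposition \(R_i = \mu_i + y_i\) above can also be evaluated classically with NumPy. This is a minimal illustrative sketch (the sample values \(d_k\) are hypothetical) and is not part of the quantum workflow:

# Classical illustration of R_i = mu_i + y_i for one pair of grid samples
d = np.array([1, 2])  # hypothetical samples d_k, each in [0, 2**NUM_QUBITS - 1]
x = MIN_X + d * STEP_X  # discretized standard Gaussian values x_min + d_k * dx
y = CHOLESKY @ x  # y_i = sum_k l_ik * x_k
R = MU + y  # total log return over the period
print("simulated asset prices:", S0 * np.exp(R))  # S0_i * exp(R_i)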
def calculate_max_reg_type():
x1 = QNum(size=NUM_QUBITS)
x2 = QNum(size=NUM_QUBITS)
expr = qmax(get_affine_formula([x1, x2], 0), get_affine_formula([x1, x2], 1) + c)
size_in_bits, sign, fraction_digits = get_expression_numeric_attributes(
[x1, x2], expr
)
return size_in_bits, fraction_digits
MAX_NUM_QUBITS, MAX_FRAC_PLACES = calculate_max_reg_type()
@qperm
def affine_max(x1: Const[QNum], x2: Const[QNum], res: Output[QNum]):
res |= qmax(get_affine_formula([x1, x2], 0), get_affine_formula([x1, x2], 1) + c)
Direct Method
The direct exponential amplitude loading encodes in \(\tilde{f}\) the following function:

\(\tilde{f}(x)= \begin{cases} e^{-a\hat{x}}, & \text{if } \frac{x}{2^P} \geq \frac{\log(K) -b'}{b}\\ Ke^{-(b'+ ax_{max})}, & \text{if } \frac{x}{2^P} < \frac{\log(K) -b'}{b} \end{cases}\)
where \(\hat{x}\) is the binary complement of \(x\) (\(\hat{x}=x_{max}-x\)) and \(x_{max}=2^R-1\) is the maximum value that can be stored in the \(R\)-qubit \(|x\rangle\) register. To load \(e^{-a\hat{x}}\), the \(|r\rangle\) register is initialized to all zeros and one controlled rotation per qubit of \(|x\rangle\) is performed, with rotation angles \(\theta_i = 2\arccos\left(\sqrt{e^{-a2^i}}\right)\). A multi-controlled X (MCX) gate then collects the amplitude of the all-zero state \(|0\rangle^{\otimes R}\) of \(|r\rangle\) into the \(|1\rangle\) state of a target qubit, so the probability of measuring the target in \(|1\rangle\) is \(e^{-a\hat{x}}\).
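As a quick classical sanity check (illustrative only; the rate and register value below are hypothetical), the probability of finding \(|r\rangle\) in the all-zero state after these rotations telescopes to \(e^{-a\hat{x}}\):

# Classical check that the controlled-RY angles reproduce exp(-a * x_hat)
a_rate = 0.3  # hypothetical exponent rate
x_hat = 5  # hypothetical complemented register value (binary 101)
prob_all_zero = 1.0
for i in range(3):
    if (x_hat >> i) & 1:
        theta = 2 * np.arccos(np.sqrt(np.exp(-a_rate * 2**i)))
        prob_all_zero *= np.cos(theta / 2) ** 2  # P(|0>) contributed by RY(theta)
assert np.isclose(prob_all_zero, np.exp(-a_rate * x_hat))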
from classiq.qmod.symbolic import acos, asin, exp, sqrt
@qfunc
def exponential_amplitude_loading(
exp_rate: CReal, x: Const[QArray[QBit]], aux: QArray[QBit], res: QBit
) -> None:
within_apply(
lambda: apply_to_all(X, x),
lambda: repeat(
x.len,
lambda index: control(
x[index],
lambda: RY(2 * acos(1 / sqrt(exp(exp_rate * (2**index)))), aux[index]),
),
),
)
aux_num = QNum()
within_apply(lambda: bind(aux, aux_num), lambda: inplace_xor(aux_num == 0, res))
class EstimationVars(QStruct):
x1: QNum[NUM_QUBITS]
x2: QNum[NUM_QUBITS]
aux: QNum[MAX_NUM_QUBITS]
def get_payoff_expression(x, size, fraction_digits):
payoff = sqrt(
qmax(
S0[0]
* exp(
STEP_X / SCALING_FACTOR * (2 ** (size - fraction_digits)) * x
+ (MU[0] + MIN_X * CHOLESKY[0].sum())
),
K,
)
)
return payoff
def get_strike_price_theta_direct(x: QNum):
x_max = 1 - 1 / (2**x.size)
payoff_max = get_payoff_expression(x_max, x.size, x.fraction_digits)
return 2 * asin(np.sqrt(K) / payoff_max)
# This is not a qfunc, just a classical utility that builds the comparison expression
def is_geq_strike_price(x: Const[QNum]):
    a = STEP_X / SCALING_FACTOR
    b = np.log(S0[0]) + MU[0] + MIN_X * CHOLESKY[0].sum()
    COMP_VALUE = (np.log(K) - b) / a
    return x > floor_factor(COMP_VALUE)
@qfunc
def direct_payoff(max_reg: Const[QNum], aux_reg: QNum, ind_reg: QBit):
exp_rate = (1 / (2**max_reg.fraction_digits)) * a
control(
is_geq_strike_price(max_reg),
lambda: exponential_amplitude_loading(exp_rate, max_reg, aux_reg, ind_reg),
lambda: RY(get_strike_price_theta_direct(max_reg), ind_reg),
)
@qfunc
def rainbow_direct(qvars: EstimationVars, ind: QBit) -> None:
inplace_prepare_state(probabilities, 0, qvars.x1)
inplace_prepare_state(probabilities, 0, qvars.x2)
max_out = QNum()
within_apply(
lambda: affine_max(qvars.x1, qvars.x2, max_out),
lambda: direct_payoff(max_out, qvars.aux, ind),
)
@qfunc
def main(qvars: Output[EstimationVars], ind: Output[QBit]) -> None:
allocate(qvars)
allocate(ind)
rainbow_direct(qvars, ind)
MAX_WIDTH = 24
qmod = create_model(
main,
constraints=Constraints(max_width=MAX_WIDTH),
preferences=Preferences(optimization_level=1),
)
print("Starting synthesis")
qprog = synthesize(qmod)
show(qprog)
Starting synthesis
Quantum program link: https://platform.classiq.io/circuit/31azX7IWM0iqgTtx09EMoHWV7DQ
Iterative Quantum Amplitude Estimation (IQAE) Algorithm
from classiq.applications.iqae.iqae import IQAE
MAX_WIDTH_2 = 25
iqae = IQAE(
state_prep_op=rainbow_direct,
problem_vars_size=NUM_QUBITS * NUM_ASSETS + MAX_NUM_QUBITS,
constraints=Constraints(max_width=MAX_WIDTH_2),
preferences=Preferences(optimization_level=1),
)
qmod_2 = iqae.get_model()
write_qmod(qmod_2, "rainbow_options_direct_method")
print("Starting synthesis")
qprog_2 = iqae.get_qprog()
show(qprog_2)
print("Starting execution")
result = iqae.run(EPSILON, ALPHA)
Starting synthesis
Quantum program link: https://platform.classiq.io/circuit/31azbD4kPOoiuFS04Nhft8rxsHH
Starting execution
Post-process
Post-process the raw IQAE estimate into the option price by rescaling with the maximum payoff and subtracting the strike:

\(\mathbb{E}\left[\max\left(e^{b \cdot z}, Ke^{-b'}\right)\right] e^{b'} - K = \mathbb{E}\left[\max\left(e^{-a\hat{x}}, Ke^{-b'-ax_{max}}\right)\right] e^{b'+ ax_{max}} - K\)
import sympy
payoff_expression = f"sqrt(max([{S0[0]} * exp({STEP_X / SCALING_FACTOR * (2 ** (MAX_NUM_QUBITS - MAX_FRAC_PLACES))} * x + ({MU[0]+MIN_X*CHOLESKY[0].sum()})), {K}]))"
payoff_func = sympy.lambdify(sympy.symbols("x"), payoff_expression)
payoff_max = payoff_func(1 - 1 / (2**MAX_NUM_QUBITS))
def parse_result_direct(iqae_res):
option_value = iqae_res.estimation * (payoff_max**2) - K
confidence_interval = np.array(iqae_res.confidence_interval) * (payoff_max**2) - K
return (option_value, confidence_interval)
Run Method
parsed_result, conf_interval = parse_result_direct(result)
print(
f"raw iqae results: {result.estimation} with confidence interval {result.confidence_interval}"
)
print(
f"option estimated value: {parsed_result} with confidence interval {conf_interval}"
)
raw iqae results: 0.08038851110264703 with confidence interval [0.07825324532910591, 0.08252377687618816]
option estimated value: 24.920207288838697 with confidence interval [19.21153379 30.62888078]
Assertions
expected_payoff = 23.0238
ALPHA_ASSERTION = 1e-5
measured_confidence = conf_interval[1] - conf_interval[0]
# Based on epsilon^2 = (1/2N) * log(2T/alpha) from "Iterative Quantum Amplitude Estimation":
# since ALPHA_ASSERTION is much smaller than ALPHA, the check is performed against a
# correspondingly wider confidence interval.
confidence_scale_by_alpha = np.sqrt(np.log(ALPHA / ALPHA_ASSERTION))
assert (
np.abs(parsed_result - expected_payoff)
<= 0.5 * measured_confidence * confidence_scale_by_alpha
), f"Payoff result is out of the {ALPHA_ASSERTION*100}% confidence interval: |{parsed_result} - {expected_payoff}| > {0.5*measured_confidence * confidence_scale_by_alpha}"
References
[1] Francesca Cibrario et al., Quantum Amplitude Loading for Rainbow Options Pricing. Preprint.