The key methodological innovations of the proposed G-LSM method include:
Using a sparse Hermite polynomial space with a hyperbolic cross index set as the ansatz space for approximating the continuation value functions (CVFs). This allows gradients to be computed efficiently at nearly no extra cost.
Incorporating gradient information into the computation of the expansion coefficients by solving a linear least squares problem, in contrast to the projection-based approach of the standard least squares Monte Carlo (LSM) method.
The authors analyze the convergence of G-LSM using backward stochastic differential equation (BSDE) techniques together with stochastic and Malliavin calculus, and establish an error bound in terms of the time step size, the statistical error of the Monte Carlo approximation, and the best-approximation error in a weighted Sobolev space.
Numerical experiments show that G-LSM outperforms the state-of-the-art LSM method in terms of accuracy for prices, Greeks, and optimal exercise strategies, at nearly identical computational cost. It also delivers results comparable to recent neural-network-based methods in up to 100 dimensions.
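To make the first point concrete, here is a minimal sketch of a hyperbolic cross index set and of the probabilists' Hermite recurrence. The function names (`hyperbolic_cross`, `hermite_table`) are illustrative, not taken from the paper; the brute-force enumeration is only practical in low dimensions, whereas a high-dimensional implementation would build the set recursively.

```python
import numpy as np
from itertools import product

def hyperbolic_cross(d, N):
    """Multi-indices alpha in N_0^d with prod_i (alpha_i + 1) <= N.

    Brute force over the box [0, N-1]^d -- fine for small d, but a
    recursive construction is needed in high dimensions.
    """
    return [a for a in product(range(N), repeat=d)
            if np.prod(np.array(a) + 1) <= N]

def hermite_table(x, nmax):
    """Probabilists' Hermite polynomials He_0..He_nmax at scalar x,
    via the three-term recurrence He_{n+1}(x) = x He_n(x) - n He_{n-1}(x).
    """
    H = np.empty(nmax + 1)
    H[0] = 1.0
    if nmax >= 1:
        H[1] = x
    for n in range(1, nmax):
        H[n + 1] = x * H[n] - n * H[n - 1]
    return H
```

Because He_n'(x) = n He_{n-1}(x), the derivative of every tensor-product basis function reuses the same table of Hermite values, which is one way gradients come at "nearly no extra cost".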
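The gradient-enhanced least squares step can be sketched generically: stack one value equation and d gradient equations per sample into a single linear system and solve it in the least squares sense. This is a hedged sketch of the general technique, not the paper's exact formulation; in particular the gradient weight `lam` and the function names are assumptions.

```python
import numpy as np

def fit_with_gradients(X, y, g, basis, basis_grad, lam=1.0):
    """Solve min_c sum_i |phi(x_i)^T c - y_i|^2
                 + lam^2 * sum_i |Dphi(x_i) c - g_i|^2.

    basis(x) returns the (K,) vector of basis values at x;
    basis_grad(x) returns the (d, K) Jacobian of the basis at x.
    The weight `lam` on the gradient equations is an assumption.
    """
    rows, rhs = [], []
    for xi, yi, gi in zip(X, y, g):
        rows.append(basis(xi))      # one value equation per sample
        rhs.append(yi)
        J = basis_grad(xi)          # (d, K) Jacobian of the basis
        for j in range(len(xi)):    # d gradient equations per sample
            rows.append(lam * J[j])
            rhs.append(lam * gi[j])
    A, b = np.asarray(rows), np.asarray(rhs)
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef
```

For example, fitting f(x) = x^2 with the monomial basis (1, x, x^2) and exact gradients 2x recovers the coefficients (0, 0, 1).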
Key Insights Distilled From
by Jiefei Yang et al. at arxiv.org, 05-07-2024
https://arxiv.org/pdf/2405.02570.pdf