In computational finance, the efficient computation of price sensitivities ('the Greeks') is essential for hedging and model calibration. Adjoint methods are a well-established mathematical approach for efficiently computing sensitivities when there are many input parameters but only one output quantity. In this case, the computational cost is similar to that of the original pricing calculation, whereas the standard forward (tangent-linear) sensitivity approach has a cost proportional to the number of inputs.
In this one-day course, we will discuss the mathematical foundations of adjoint methods, algorithmic differentiation (AD) as a general computational technique for the efficient calculation of price sensitivities, and the use of AD software as a way to generate the adjoint code. We concentrate on the application of AD to Monte Carlo methods for SDEs, but also cover finite difference methods for PDEs. Calibration is presented as a target application that can benefit tremendously from the use of adjoint AD (AAD).
Practical examples/exercises will be based on techniques for hand-coding adjoint implementations and on the AD software tool dco (derivative code by overloading) for C/C++. We discuss the implementation of checkpointing methods for handling the often prohibitive memory requirements of adjoint code; here the specific structure of ensembles (found in Monte Carlo methods for SDEs) and of evolutions (found in finite difference methods for PDEs) must be exploited. More generally, adjoint code design patterns are proposed for obtaining efficient, robust, and scalable adjoint code.