Tired of running stochastic scenarios for your technical provisions? You can probably make do with a lot fewer

Two weeks ago, I wrote up a little piece on three ingredients to make Solvency II Technical Provisions – and a lot of similar problems – easier. One of those was to choose your random numbers right, which sounds like a contradiction in terms. It turns out you can play some neat tricks that improve the precision of your results without ruining the validity of your solution. These tricks go under the name of variance reduction techniques. But first, let me frame the question a bit.

What is the precision of a stochastic estimate?

For pricing complex products, such as the Solvency II Best Estimate provisions, you have to estimate the mean net present value of cashflows under the risk-neutral measure. In practice, this is done by simulating a large number of independent and identically distributed scenarios from the risk-neutral measure, computing the net present value in each scenario, and taking the average.

The problem is that you potentially have to run a lot of scenarios for the average to be close enough to the true mean. The error you make by running a finite number of scenarios, n, typically scales as n^(-1/2). In other words, if you want to cut the error in half, you have to run four times as many scenarios.

As an example, I have taken the pricing of a portfolio of investment contracts with annual rebalancing to 50 % equities, and with a range of guaranteed minimum pay-outs after 20 years. The value of the investment portfolio is 100 and the total value of the guarantees is around 16. As I increase the number of scenarios, the error goes down, but only slowly.

At 100,000 scenarios, the error (standard deviation) in the valuation of the guarantees is around 0.3 %. Whether this is good enough is a question for another day, but running 100,000 scenarios will often be a major technical obstacle for real portfolios. How can we bring that number down significantly?
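To make the setup concrete, here is a minimal sketch in Python of the plain Monte Carlo estimate. It follows the description above (annual rebalancing to a fixed equity weight, a minimum pay-out after 20 years), but the helper name simulate_guarantee_payoffs and all parameter values are placeholders of mine, not the figures behind the numbers quoted here.

```python
import numpy as np

def simulate_guarantee_payoffs(n_scenarios, n_years=20, s0=100.0,
                               equity_weight=0.5, sigma=0.20, r=0.02,
                               guarantee=100.0, rng=None, z=None):
    """Discounted payoffs of a minimum pay-out guarantee on a portfolio
    rebalanced annually to a fixed equity weight, with Black-Scholes
    equities under the risk-neutral measure. Parameter values are
    illustrative placeholders."""
    if z is None:
        rng = rng or np.random.default_rng()
        z = rng.standard_normal((n_scenarios, n_years))
    # Annual risk-neutral equity gross returns: exp((r - sigma^2/2) + sigma*z)
    equity = np.exp((r - 0.5 * sigma**2) + sigma * z)
    cash = np.exp(r)
    # Rebalancing each year keeps the equity weight constant, so the
    # portfolio's annual gross return is a fixed mix of equity and cash
    growth = equity_weight * equity + (1.0 - equity_weight) * cash
    terminal = s0 * growth.prod(axis=1)
    # The guarantee pays the shortfall below the guaranteed amount at maturity
    return np.exp(-r * n_years) * np.maximum(guarantee - terminal, 0.0)

payoffs = simulate_guarantee_payoffs(100_000, rng=np.random.default_rng(1))
print(f"estimate {payoffs.mean():.3f}, std error "
      f"{payoffs.std(ddof=1) / np.sqrt(payoffs.size):.3f}")
```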

Number one: Antithetic variates

This technique is a classic and should be in your toolbox no matter what. It is relatively easy to implement, and in almost all cases it will improve your precision substantially.

The trick is to draw the scenarios in pairs that are ‘opposite’ to each other, hence the name, antithetic. In most cases, this will lead to net present values that are also ‘opposite’ each other in the distribution of net present values, and the average of the two will be closer to the mean you seek.

In my example, equity returns in each scenario are governed by a stochastic vector, z, of normally distributed numbers. Its negative, –z, follows the same distribution as z, so there is no problem in using it for the next scenario. Running the scenarios in such pairs, as sketched below, reduces the error significantly.
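A minimal sketch of the pairing, reusing the hypothetical simulate_guarantee_payoffs helper from above: draw half as many normal vectors, run each of them and its negative, and average within each pair before estimating the error, since the pair averages, not the individual scenarios, are the independent observations.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pairs = 50_000  # 2 * 50,000 scenario evaluations in total
z = rng.standard_normal((n_pairs, 20))

# Evaluate each pair with opposite draws and average within the pair;
# the pair averages are i.i.d., so the usual error formula applies to them
pair_means = 0.5 * (simulate_guarantee_payoffs(n_pairs, z=z)
                    + simulate_guarantee_payoffs(n_pairs, z=-z))

print(f"estimate {pair_means.mean():.3f}, std error "
      f"{pair_means.std(ddof=1) / np.sqrt(n_pairs):.3f}")
```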

In this example, the error goes down by a factor of 2.4. If I were content with the error at 100,000 scenarios without antithetic variates, I can make do with about 18,000 scenarios with them.

The effect you get depends a lot on the problem at hand, but antithetic variates work well when the quantity you value depends monotonically on the underlying randomness. It often does.

Number two: Control variates

This technique is about actively using all the information you have. In my example, equities follow a Black-Scholes process, which comes with well-known analytical results for pricing futures and options. I can then rewrite the value of the guarantee as a linear combination of these and a remainder, which is hopefully easier to tame. Schematically, I have the following relationship if I choose constants k1 and k2:

Guarantee = k1 * option + k2 * future + remainder

This formula should be understood in two ways. First, in the sense of payoffs: in each scenario, I know the payoffs of the guarantee, the option, and the future; the remainder is just what is left. Second, in the sense of values: the value of the guarantee is a linear combination of the values of the option and the future (which are known analytically) and of the remainder, which I back out of the simulation.

Minimising the error is equivalent to minimising the variance of the remainder, which is conveniently done by choosing k1 and k2 with an ordinary least-squares regression, as sketched below.
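Here is a minimal sketch of that regression step, under the same illustrative assumptions. The function and the payoff arrays in the usage comment are hypothetical names of mine, and the analytical prices would come from the Black-Scholes formulas.

```python
import numpy as np

def control_variate_estimate(y, controls, analytic_prices):
    """Control-variate estimate of E[y] and its standard error.

    y               -- simulated discounted payoffs of the target (the guarantee)
    controls        -- (n, k) array of simulated discounted control payoffs,
                       here the 20-year future and the at-the-money option
    analytic_prices -- length-k vector of their exactly known values
    """
    y = np.asarray(y, dtype=float)
    X = np.asarray(controls, dtype=float)
    # OLS of y on the controls (with intercept); the slopes are the
    # variance-minimising coefficients k1, k2, ...
    design = np.column_stack([np.ones(y.size), X])
    coefs, *_ = np.linalg.lstsq(design, y, rcond=None)
    k = coefs[1:]
    # Correct the naive mean by the controls' observed sampling error
    estimate = y.mean() - k @ (X.mean(axis=0) - np.asarray(analytic_prices))
    # The residual is the 'remainder'; its variance drives the error
    resid = y - design @ coefs
    return estimate, resid.std(ddof=coefs.size) / np.sqrt(y.size)

# Hypothetical usage, with payoff arrays from the same scenario set:
# value, se = control_variate_estimate(
#     guarantee_payoffs,
#     np.column_stack([future_payoffs, option_payoffs]),
#     [future_price_analytic, option_price_analytic])
```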

In my example, it makes sense to use the 20-year equity future and the at-the-money 20-year equity future option. Compared to the naïve estimate, the speedup is a notch better than for antithetic variates, reducing the number of scenarios by a factor of 9.

As a bonus, this procedure ensures that the option and the future are always priced exactly by their analytical formulas, which is central to consistent, arbitrage-free pricing.

Where to go from here?

Antithetic variates and control variates are the two most common variance reduction techniques, but there are more to choose from; they inevitably require your problem to have some particular feature to work well. I will again recommend Glasserman's Monte Carlo Methods in Financial Engineering.

The example presented here is obviously fictitious, but has not been particularly tweaked to make the techniques look better. Details and code are available on request.

